Jan 29 11:56:03.944274 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:56:03.944304 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:03.944320 kernel: BIOS-provided physical RAM map:
Jan 29 11:56:03.944329 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:56:03.944337 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:56:03.944346 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:56:03.944357 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:56:03.944365 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:56:03.944374 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 29 11:56:03.944382 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 29 11:56:03.944395 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 29 11:56:03.944405 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 29 11:56:03.944419 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 29 11:56:03.944428 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 29 11:56:03.944443 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 29 11:56:03.944452 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:56:03.944466 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 29 11:56:03.944476 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 29 11:56:03.944485 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:56:03.944495 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:56:03.944504 kernel: NX (Execute Disable) protection: active
Jan 29 11:56:03.944513 kernel: APIC: Static calls initialized
Jan 29 11:56:03.944522 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:56:03.944532 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 29 11:56:03.944542 kernel: SMBIOS 2.8 present.
Jan 29 11:56:03.944551 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 29 11:56:03.944560 kernel: Hypervisor detected: KVM
Jan 29 11:56:03.944574 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:56:03.944583 kernel: kvm-clock: using sched offset of 5276064268 cycles
Jan 29 11:56:03.944594 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:56:03.944604 kernel: tsc: Detected 2794.750 MHz processor
Jan 29 11:56:03.944614 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:56:03.944624 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:56:03.944634 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 29 11:56:03.944643 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:56:03.944653 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:56:03.944668 kernel: Using GB pages for direct mapping
Jan 29 11:56:03.944677 kernel: Secure boot disabled
Jan 29 11:56:03.944687 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:56:03.944697 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 29 11:56:03.944712 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:56:03.944723 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:03.944733 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:03.944747 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 29 11:56:03.944757 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:03.944772 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:03.944782 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:03.944793 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:03.944803 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:56:03.944813 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 29 11:56:03.944828 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 29 11:56:03.944838 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 29 11:56:03.944848 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 29 11:56:03.944858 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 29 11:56:03.944869 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 29 11:56:03.944879 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 29 11:56:03.944889 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 29 11:56:03.944899 kernel: No NUMA configuration found
Jan 29 11:56:03.944913 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 29 11:56:03.944929 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 29 11:56:03.944950 kernel: Zone ranges:
Jan 29 11:56:03.944960 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:56:03.944971 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 29 11:56:03.944981 kernel: Normal empty
Jan 29 11:56:03.944991 kernel: Movable zone start for each node
Jan 29 11:56:03.945001 kernel: Early memory node ranges
Jan 29 11:56:03.945011 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:56:03.945021 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 29 11:56:03.945031 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 29 11:56:03.945045 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 29 11:56:03.945056 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 29 11:56:03.945066 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 29 11:56:03.945080 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 29 11:56:03.945090 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:56:03.945100 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:56:03.945110 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 29 11:56:03.945121 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:56:03.945131 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 29 11:56:03.945145 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 29 11:56:03.945180 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 29 11:56:03.945191 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:56:03.945201 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:56:03.945212 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:56:03.945222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:56:03.945232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:56:03.945242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:56:03.945253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:56:03.945267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:56:03.945278 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:56:03.945288 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:56:03.945298 kernel: TSC deadline timer available
Jan 29 11:56:03.945309 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:56:03.945319 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:56:03.945329 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:56:03.945339 kernel: kvm-guest: setup PV sched yield
Jan 29 11:56:03.945349 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 29 11:56:03.945363 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:56:03.945374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:56:03.945384 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:56:03.945394 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:56:03.945405 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:56:03.945415 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:56:03.945425 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:56:03.945435 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:56:03.945447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:03.945465 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:56:03.945476 kernel: random: crng init done
Jan 29 11:56:03.945486 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:56:03.945496 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:56:03.945506 kernel: Fallback order for Node 0: 0
Jan 29 11:56:03.945517 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 29 11:56:03.945527 kernel: Policy zone: DMA32
Jan 29 11:56:03.945537 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:56:03.945552 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 29 11:56:03.945563 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:56:03.945573 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:56:03.945583 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:56:03.945593 kernel: Dynamic Preempt: voluntary
Jan 29 11:56:03.945614 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:56:03.945629 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:56:03.945640 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:56:03.945651 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:56:03.945662 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:56:03.945673 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:56:03.945683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:56:03.945698 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:56:03.945708 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:56:03.945723 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:56:03.945734 kernel: Console: colour dummy device 80x25
Jan 29 11:56:03.945744 kernel: printk: console [ttyS0] enabled
Jan 29 11:56:03.945759 kernel: ACPI: Core revision 20230628
Jan 29 11:56:03.945770 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:56:03.945781 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:56:03.945792 kernel: x2apic enabled
Jan 29 11:56:03.945802 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:56:03.945814 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:56:03.945827 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:56:03.945838 kernel: kvm-guest: setup PV IPIs
Jan 29 11:56:03.945848 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:56:03.945863 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:56:03.945873 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 29 11:56:03.945884 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:56:03.945894 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:56:03.945905 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:56:03.945916 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:56:03.945927 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:56:03.945937 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:56:03.945960 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:56:03.945974 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:56:03.945985 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:56:03.945996 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:56:03.946007 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:56:03.946021 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:56:03.946032 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:56:03.946043 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:56:03.946054 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:56:03.946069 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:56:03.946080 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:56:03.946090 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:56:03.946101 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:56:03.946111 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:56:03.946122 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:56:03.946133 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:56:03.946143 kernel: landlock: Up and running.
Jan 29 11:56:03.946176 kernel: SELinux: Initializing.
Jan 29 11:56:03.946191 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:56:03.946201 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:56:03.947513 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:56:03.947526 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:56:03.947537 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:56:03.947548 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:56:03.947559 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:56:03.947570 kernel: ... version: 0
Jan 29 11:56:03.947581 kernel: ... bit width: 48
Jan 29 11:56:03.947598 kernel: ... generic registers: 6
Jan 29 11:56:03.947609 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:56:03.947620 kernel: ... max period: 00007fffffffffff
Jan 29 11:56:03.947631 kernel: ... fixed-purpose events: 0
Jan 29 11:56:03.947641 kernel: ... event mask: 000000000000003f
Jan 29 11:56:03.947652 kernel: signal: max sigframe size: 1776
Jan 29 11:56:03.947663 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:56:03.947675 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:56:03.947686 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:56:03.947701 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:56:03.947712 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:56:03.947723 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:56:03.947734 kernel: smpboot: Max logical packages: 1
Jan 29 11:56:03.947745 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 29 11:56:03.947756 kernel: devtmpfs: initialized
Jan 29 11:56:03.947767 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:56:03.947778 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 29 11:56:03.947789 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 29 11:56:03.947804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 29 11:56:03.947815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 29 11:56:03.947826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 29 11:56:03.947837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:56:03.947849 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:56:03.947860 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:56:03.947870 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:56:03.947881 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:56:03.947891 kernel: audit: type=2000 audit(1738151762.741:1): state=initialized audit_enabled=0 res=1
Jan 29 11:56:03.947906 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:56:03.947917 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:56:03.947927 kernel: cpuidle: using governor menu
Jan 29 11:56:03.947938 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:56:03.947960 kernel: dca service started, version 1.12.1
Jan 29 11:56:03.947971 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:56:03.947981 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:56:03.947992 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:56:03.948003 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:56:03.948018 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:56:03.948029 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:56:03.948040 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:56:03.948050 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:56:03.948061 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:56:03.948072 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:56:03.948082 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:56:03.948093 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:56:03.948103 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:56:03.948118 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:56:03.948128 kernel: ACPI: Interpreter enabled
Jan 29 11:56:03.948139 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:56:03.948163 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:56:03.948175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:56:03.948196 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:56:03.948217 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:56:03.948228 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:56:03.948505 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:56:03.948694 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:56:03.948874 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:56:03.948891 kernel: PCI host bridge to bus 0000:00
Jan 29 11:56:03.949094 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:56:03.949270 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:56:03.949424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:56:03.949584 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:56:03.949757 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:56:03.949929 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 29 11:56:03.950099 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:56:03.950398 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:56:03.950589 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:56:03.950750 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 29 11:56:03.950913 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 29 11:56:03.951087 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 29 11:56:03.951281 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 29 11:56:03.951456 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:56:03.951652 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:56:03.951832 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 29 11:56:03.952020 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 29 11:56:03.952209 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 29 11:56:03.952406 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:56:03.952582 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 29 11:56:03.952758 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 29 11:56:03.952950 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 29 11:56:03.953229 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:56:03.953426 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 29 11:56:03.953594 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 29 11:56:03.953760 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 29 11:56:03.953924 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 29 11:56:03.954119 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:56:03.954305 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:56:03.954479 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:56:03.954647 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 29 11:56:03.954806 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 29 11:56:03.955008 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:56:03.955209 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 29 11:56:03.955227 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:56:03.955239 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:56:03.955250 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:56:03.955267 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:56:03.955279 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:56:03.955289 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:56:03.955300 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:56:03.955311 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:56:03.955322 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:56:03.955333 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:56:03.955344 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:56:03.955355 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:56:03.955369 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:56:03.955381 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:56:03.955392 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:56:03.955402 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:56:03.955414 kernel: iommu: Default domain type: Translated
Jan 29 11:56:03.955425 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:56:03.955435 kernel: efivars: Registered efivars operations
Jan 29 11:56:03.955446 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:56:03.955457 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:56:03.955469 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 29 11:56:03.955484 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 29 11:56:03.955494 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 29 11:56:03.955505 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 29 11:56:03.955674 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:56:03.955842 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:56:03.956024 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:56:03.956042 kernel: vgaarb: loaded
Jan 29 11:56:03.956054 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:56:03.956070 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:56:03.956082 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:56:03.956093 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:56:03.956104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:56:03.956115 kernel: pnp: PnP ACPI init
Jan 29 11:56:03.956342 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:56:03.956362 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:56:03.956373 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:56:03.956390 kernel: NET: Registered PF_INET protocol family
Jan 29 11:56:03.956402 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:56:03.956414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:56:03.956425 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:56:03.956437 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:56:03.956449 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:56:03.956461 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:56:03.956472 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:56:03.956484 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:56:03.956501 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:56:03.956513 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:56:03.956687 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 29 11:56:03.956854 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 29 11:56:03.957029 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:56:03.957246 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:56:03.957407 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:56:03.957564 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:56:03.957728 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:56:03.957882 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 29 11:56:03.957898 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:56:03.957909 kernel: Initialise system trusted keyrings
Jan 29 11:56:03.957920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:56:03.957931 kernel: Key type asymmetric registered
Jan 29 11:56:03.957955 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:56:03.957966 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:56:03.957977 kernel: io scheduler mq-deadline registered
Jan 29 11:56:03.957994 kernel: io scheduler kyber registered
Jan 29 11:56:03.958005 kernel: io scheduler bfq registered
Jan 29 11:56:03.958016 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:56:03.958028 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:56:03.958039 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:56:03.958050 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:56:03.958061 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:56:03.958072 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:56:03.958083 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:56:03.958097 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:56:03.958108 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:56:03.958304 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:56:03.958321 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:56:03.958466 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:56:03.958621 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:56:03 UTC (1738151763)
Jan 29 11:56:03.958786 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:56:03.958803 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:56:03.958821 kernel: efifb: probing for efifb
Jan 29 11:56:03.958832 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 29 11:56:03.958842 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 29 11:56:03.958853 kernel: efifb: scrolling: redraw
Jan 29 11:56:03.958863 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 29 11:56:03.958873 kernel: Console: switching to colour frame buffer device 100x37
Jan 29 11:56:03.958907 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:56:03.958921 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:56:03.958933 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:56:03.958959 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:56:03.958970 kernel: Segment Routing with IPv6
Jan 29 11:56:03.958982 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:56:03.958993 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:56:03.959004 kernel: Key type dns_resolver registered
Jan 29 11:56:03.959016 kernel: IPI shorthand broadcast: enabled
Jan 29 11:56:03.959027 kernel: sched_clock: Marking stable (1130002916, 118700528)->(1278011191, -29307747)
Jan 29 11:56:03.959039 kernel: registered taskstats version 1
Jan 29 11:56:03.959051 kernel: Loading compiled-in X.509 certificates
Jan 29 11:56:03.959067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 11:56:03.959078 kernel: Key type .fscrypt registered
Jan 29 11:56:03.959089 kernel: Key type fscrypt-provisioning registered
Jan 29 11:56:03.959101 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:56:03.959112 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:56:03.959124 kernel: ima: No architecture policies found
Jan 29 11:56:03.959135 kernel: clk: Disabling unused clocks
Jan 29 11:56:03.959147 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:56:03.959178 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:56:03.959190 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:56:03.959202 kernel: Run /init as init process
Jan 29 11:56:03.959213 kernel: with arguments:
Jan 29 11:56:03.959224 kernel: /init
Jan 29 11:56:03.959235 kernel: with environment:
Jan 29 11:56:03.959246 kernel: HOME=/
Jan 29 11:56:03.959257 kernel: TERM=linux
Jan 29 11:56:03.959269 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:56:03.959287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:56:03.959302 systemd[1]: Detected virtualization kvm.
Jan 29 11:56:03.959314 systemd[1]: Detected architecture x86-64.
Jan 29 11:56:03.959326 systemd[1]: Running in initrd.
Jan 29 11:56:03.959344 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:56:03.959356 systemd[1]: Hostname set to .
Jan 29 11:56:03.959369 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:56:03.959381 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:56:03.959393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:56:03.959405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:56:03.959418 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:56:03.959431 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:56:03.959447 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:56:03.959460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:56:03.959475 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:56:03.959487 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:56:03.959499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:56:03.959511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:56:03.959523 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:56:03.959540 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:56:03.959552 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:56:03.959564 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:56:03.959576 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:56:03.959588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:56:03.959600 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:56:03.959612 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:56:03.959624 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:56:03.959637 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:56:03.959652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:56:03.959665 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:56:03.959677 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:56:03.959689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:56:03.959701 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:56:03.959714 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:56:03.959726 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:56:03.959738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:56:03.959754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:03.959766 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:56:03.959778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:56:03.959791 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:56:03.959804 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:56:03.959820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:03.959832 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:56:03.959874 systemd-journald[192]: Collecting audit messages is disabled.
Jan 29 11:56:03.959905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:03.959922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:56:03.959934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:56:03.959955 systemd-journald[192]: Journal started
Jan 29 11:56:03.959981 systemd-journald[192]: Runtime Journal (/run/log/journal/bf66b2ebb3c44cd1b0dc9387d178880c) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:56:03.939382 systemd-modules-load[194]: Inserted module 'overlay'
Jan 29 11:56:03.965128 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:56:03.976208 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:56:03.977481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:56:03.982818 kernel: Bridge firewalling registered
Jan 29 11:56:03.979532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:03.979797 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 29 11:56:03.983056 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:56:03.990232 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:56:03.993565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:56:03.994985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:56:04.009275 dracut-cmdline[222]: dracut-dracut-053
Jan 29 11:56:04.009758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:56:04.013203 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:04.021404 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:56:04.055959 systemd-resolved[240]: Positive Trust Anchors:
Jan 29 11:56:04.055980 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:56:04.056011 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:56:04.058752 systemd-resolved[240]: Defaulting to hostname 'linux'.
Jan 29 11:56:04.060125 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:56:04.067047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:56:04.112204 kernel: SCSI subsystem initialized
Jan 29 11:56:04.122206 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:56:04.135226 kernel: iscsi: registered transport (tcp)
Jan 29 11:56:04.157543 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:56:04.157637 kernel: QLogic iSCSI HBA Driver
Jan 29 11:56:04.218400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:56:04.227513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:56:04.256758 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:56:04.256842 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:56:04.257961 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:56:04.306206 kernel: raid6: avx2x4 gen() 21833 MB/s
Jan 29 11:56:04.323205 kernel: raid6: avx2x2 gen() 20938 MB/s
Jan 29 11:56:04.340558 kernel: raid6: avx2x1 gen() 17592 MB/s
Jan 29 11:56:04.340628 kernel: raid6: using algorithm avx2x4 gen() 21833 MB/s
Jan 29 11:56:04.358555 kernel: raid6: .... xor() 6043 MB/s, rmw enabled
Jan 29 11:56:04.358614 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:56:04.383194 kernel: xor: automatically using best checksumming function avx
Jan 29 11:56:04.538195 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:56:04.553402 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:56:04.564519 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:56:04.576453 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 29 11:56:04.581217 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:56:04.624504 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:56:04.639608 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Jan 29 11:56:04.679054 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:56:04.695498 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:56:04.763880 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:56:04.784372 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:56:04.797799 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:56:04.815094 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:56:04.867561 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:56:04.867785 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:56:04.867798 kernel: GPT:9289727 != 19775487
Jan 29 11:56:04.867808 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:56:04.867819 kernel: GPT:9289727 != 19775487
Jan 29 11:56:04.867829 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:56:04.867839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:04.816577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:56:04.865004 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:56:04.866381 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:56:04.878759 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:56:04.879365 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:56:04.887824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:56:04.888104 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:04.895753 kernel: libata version 3.00 loaded.
Jan 29 11:56:04.892904 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:04.894206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:56:04.894436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:04.897303 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:04.913468 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:56:04.913498 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:56:04.920966 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:56:04.975590 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:56:04.975614 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466)
Jan 29 11:56:04.975630 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:56:04.975934 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:56:04.976135 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jan 29 11:56:04.976170 kernel: scsi host0: ahci
Jan 29 11:56:04.976386 kernel: scsi host1: ahci
Jan 29 11:56:04.976590 kernel: scsi host2: ahci
Jan 29 11:56:04.976819 kernel: scsi host3: ahci
Jan 29 11:56:04.977031 kernel: scsi host4: ahci
Jan 29 11:56:04.977212 kernel: scsi host5: ahci
Jan 29 11:56:04.977375 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 29 11:56:04.977387 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 29 11:56:04.977398 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 29 11:56:04.977408 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 29 11:56:04.977419 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 29 11:56:04.977429 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 29 11:56:04.940859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:04.943590 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:56:04.967117 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:56:04.984554 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:56:04.985030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:04.996728 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:56:05.003116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:56:05.003237 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:56:05.023529 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:56:05.025135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:05.036804 disk-uuid[568]: Primary Header is updated.
Jan 29 11:56:05.036804 disk-uuid[568]: Secondary Entries is updated.
Jan 29 11:56:05.036804 disk-uuid[568]: Secondary Header is updated.
Jan 29 11:56:05.041227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:05.046194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:05.061802 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:05.281216 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:05.281324 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:05.289524 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:56:05.289621 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:05.289635 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:05.291183 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:05.291202 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:56:05.292474 kernel: ata3.00: applying bridge limits
Jan 29 11:56:05.293193 kernel: ata3.00: configured for UDMA/100
Jan 29 11:56:05.294213 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:56:05.336196 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:56:05.349031 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:56:05.349050 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:56:06.047774 disk-uuid[569]: The operation has completed successfully.
Jan 29 11:56:06.049533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:06.074515 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:56:06.074671 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:56:06.096483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:56:06.099719 sh[593]: Success
Jan 29 11:56:06.114185 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:56:06.148283 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:56:06.157796 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:56:06.161098 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:56:06.173475 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 11:56:06.173563 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:06.173575 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:56:06.174477 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:56:06.175256 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:56:06.181397 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:56:06.183517 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:56:06.197306 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:56:06.200245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:56:06.209609 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:06.209651 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:06.209662 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:06.213198 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:06.222322 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:56:06.224087 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:06.233462 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:56:06.239432 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:56:06.306106 ignition[688]: Ignition 2.19.0
Jan 29 11:56:06.306117 ignition[688]: Stage: fetch-offline
Jan 29 11:56:06.306184 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:06.306202 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:06.306336 ignition[688]: parsed url from cmdline: ""
Jan 29 11:56:06.306340 ignition[688]: no config URL provided
Jan 29 11:56:06.306347 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:56:06.306358 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:56:06.306389 ignition[688]: op(1): [started] loading QEMU firmware config module
Jan 29 11:56:06.306395 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:56:06.316530 ignition[688]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:56:06.326725 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:56:06.348404 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:56:06.362365 ignition[688]: parsing config with SHA512: 9b453edac986887e5db4ddec9c8ba3d2e8da4240d6fe6c8a1b3c95c6cfd38aa3f1f835747f55651abecf8628d8c6b28b4c18456e3f164ef09caa3760c4e8cc14
Jan 29 11:56:06.367920 unknown[688]: fetched base config from "system"
Jan 29 11:56:06.368148 unknown[688]: fetched user config from "qemu"
Jan 29 11:56:06.368647 ignition[688]: fetch-offline: fetch-offline passed
Jan 29 11:56:06.368732 ignition[688]: Ignition finished successfully
Jan 29 11:56:06.371240 systemd-networkd[782]: lo: Link UP
Jan 29 11:56:06.371244 systemd-networkd[782]: lo: Gained carrier
Jan 29 11:56:06.373288 systemd-networkd[782]: Enumeration completed
Jan 29 11:56:06.373405 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:56:06.373749 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:56:06.373754 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:56:06.374037 systemd[1]: Reached target network.target - Network.
Jan 29 11:56:06.374803 systemd-networkd[782]: eth0: Link UP
Jan 29 11:56:06.374807 systemd-networkd[782]: eth0: Gained carrier
Jan 29 11:56:06.374814 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:56:06.388410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:56:06.388726 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:56:06.396357 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:56:06.403211 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:56:06.412107 ignition[785]: Ignition 2.19.0
Jan 29 11:56:06.412118 ignition[785]: Stage: kargs
Jan 29 11:56:06.412349 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:06.412361 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:06.413362 ignition[785]: kargs: kargs passed
Jan 29 11:56:06.413418 ignition[785]: Ignition finished successfully
Jan 29 11:56:06.419695 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:56:06.432306 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:56:06.444703 ignition[794]: Ignition 2.19.0
Jan 29 11:56:06.444715 ignition[794]: Stage: disks
Jan 29 11:56:06.444925 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:06.444942 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:06.449181 ignition[794]: disks: disks passed
Jan 29 11:56:06.449247 ignition[794]: Ignition finished successfully
Jan 29 11:56:06.451758 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:56:06.453024 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:56:06.454912 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:56:06.455118 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:56:06.459232 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:56:06.459425 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:56:06.480285 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:56:06.493934 systemd-resolved[240]: Detected conflict on linux IN A 10.0.0.115
Jan 29 11:56:06.493948 systemd-resolved[240]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Jan 29 11:56:06.495355 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:56:06.501629 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:56:06.502785 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:56:06.590200 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 11:56:06.590855 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:56:06.592601 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:56:06.602277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:56:06.604454 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:56:06.605796 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:56:06.611339 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Jan 29 11:56:06.605852 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:56:06.618392 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:56:06.618416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:56:06.618431 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:56:06.618446 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:56:06.605896 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:56:06.615711 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:56:06.619817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:56:06.623331 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:56:06.665560 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:56:06.670328 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:56:06.675833 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:56:06.681271 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:56:06.765210 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:56:06.773243 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:56:06.774844 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:56:06.782195 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:56:06.799579 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:56:06.851772 ignition[928]: INFO : Ignition 2.19.0 Jan 29 11:56:06.851772 ignition[928]: INFO : Stage: mount Jan 29 11:56:06.853684 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:56:06.853684 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:56:06.853684 ignition[928]: INFO : mount: mount passed Jan 29 11:56:06.853684 ignition[928]: INFO : Ignition finished successfully Jan 29 11:56:06.859572 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:56:06.876391 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:56:07.172891 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:56:07.186400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:56:07.196205 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Jan 29 11:56:07.196236 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:56:07.196247 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:56:07.198178 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:56:07.201168 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:56:07.202480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
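The initrd-setup-root errors above come from cut being run against passwd-style databases that do not yet exist under /sysroot on a first boot. As a point of reference, a minimal Python stand-in for that kind of field extraction; the exact fields the service extracts are not shown in the log, so the field number below is arbitrary.

    from pathlib import Path

    def cut_field(path, field, sep=":"):
        # Rough equivalent of `cut -d: -fN FILE` on a passwd-format database.
        p = Path(path)
        if not p.is_file():
            # Matches the "No such file or directory" messages seen above.
            print(f"cut: {path}: No such file or directory")
            return None
        return [line.split(sep)[field - 1] for line in p.read_text().splitlines() if line]

    print(cut_field("/sysroot/etc/passwd", 1))   # user names, or None on a first boot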
Jan 29 11:56:07.229351 ignition[955]: INFO : Ignition 2.19.0 Jan 29 11:56:07.229351 ignition[955]: INFO : Stage: files Jan 29 11:56:07.231340 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:56:07.231340 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:56:07.231340 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:56:07.234703 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:56:07.236066 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:56:07.240242 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:56:07.241880 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:56:07.243936 unknown[955]: wrote ssh authorized keys file for user: core Jan 29 11:56:07.245460 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:56:07.247290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:56:07.249433 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:56:07.284698 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:56:07.358828 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:56:07.358828 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:07.364091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:56:07.570367 systemd-networkd[782]: eth0: Gained IPv6LL Jan 29 11:56:07.689299 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:56:08.051483 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:08.051483 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 11:56:08.055935 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:56:08.074449 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:56:08.078912 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:56:08.080780 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:56:08.080780 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:56:08.080780 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:56:08.080780 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:56:08.080780 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:56:08.080780 ignition[955]: INFO : files: files passed Jan 29 11:56:08.080780 ignition[955]: INFO : Ignition finished successfully Jan 29 11:56:08.082122 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:56:08.090307 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:56:08.093523 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 29 11:56:08.095394 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:56:08.095503 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:56:08.104886 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:56:08.106682 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:56:08.106682 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:56:08.113069 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:56:08.108178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:56:08.110134 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:56:08.122283 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:56:08.147123 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:56:08.147269 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:56:08.149672 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:56:08.151717 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:56:08.153785 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:56:08.162289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:56:08.178931 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:56:08.192285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:56:08.202086 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:56:08.202250 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:56:08.205974 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:56:08.207272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:56:08.207424 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:56:08.212274 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:56:08.212409 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:56:08.215393 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:56:08.216338 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:56:08.219756 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:56:08.220986 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:56:08.224493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:56:08.225976 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:56:08.229741 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:56:08.232033 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:56:08.233058 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:56:08.233215 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:56:08.236589 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 11:56:08.237637 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:56:08.238102 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:56:08.238230 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:56:08.241980 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:56:08.242110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:56:08.247047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:56:08.247199 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:56:08.248267 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:56:08.251186 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:56:08.255210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:56:08.255379 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:56:08.258105 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:56:08.258609 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:56:08.258723 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:56:08.262431 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:56:08.262531 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:56:08.263375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:56:08.263512 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:56:08.265244 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:56:08.265375 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:56:08.281314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:56:08.283054 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:56:08.284237 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:56:08.284387 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:56:08.285559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:56:08.285713 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:56:08.293208 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:56:08.293344 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:56:08.308496 ignition[1009]: INFO : Ignition 2.19.0 Jan 29 11:56:08.308496 ignition[1009]: INFO : Stage: umount Jan 29 11:56:08.310251 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:56:08.310251 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:56:08.311802 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:56:08.314148 ignition[1009]: INFO : umount: umount passed Jan 29 11:56:08.315045 ignition[1009]: INFO : Ignition finished successfully Jan 29 11:56:08.317917 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:56:08.318045 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:56:08.320184 systemd[1]: Stopped target network.target - Network. Jan 29 11:56:08.322092 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 29 11:56:08.322147 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:56:08.323172 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:56:08.323224 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:56:08.323537 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:56:08.323580 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:56:08.327345 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:56:08.327395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:56:08.330901 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:56:08.333064 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:56:08.338624 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:56:08.338757 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:56:08.341298 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:56:08.341361 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:56:08.347279 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 29 11:56:08.349305 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:56:08.349473 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:56:08.351759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:56:08.351799 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:56:08.373279 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:56:08.374420 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:56:08.374477 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:56:08.377001 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:56:08.377053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:56:08.379369 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:56:08.379418 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:56:08.381920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:56:08.397526 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:56:08.397713 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:56:08.399319 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:56:08.399396 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:56:08.401061 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:56:08.401103 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:56:08.404044 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:56:08.404095 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:56:08.408190 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:56:08.408243 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:56:08.409357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:56:08.409408 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:56:08.432292 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:56:08.432361 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:56:08.432416 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:56:08.435726 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:56:08.435776 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:56:08.437991 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:56:08.438040 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:56:08.439285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:56:08.439335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:56:08.439923 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:56:08.440037 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:56:08.461736 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:56:08.461864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:56:08.489216 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:56:08.489351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:56:08.491897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:56:08.493958 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:56:08.494016 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:56:08.510308 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:56:08.517823 systemd[1]: Switching root. Jan 29 11:56:08.551105 systemd-journald[192]: Journal stopped Jan 29 11:56:09.708815 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 29 11:56:09.708892 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:56:09.708910 kernel: SELinux: policy capability open_perms=1 Jan 29 11:56:09.708922 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:56:09.708934 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:56:09.708945 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:56:09.708963 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:56:09.708976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:56:09.708988 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:56:09.708999 kernel: audit: type=1403 audit(1738151768.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:56:09.709020 systemd[1]: Successfully loaded SELinux policy in 44.289ms. Jan 29 11:56:09.709041 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.847ms. Jan 29 11:56:09.709054 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:56:09.709067 systemd[1]: Detected virtualization kvm. Jan 29 11:56:09.709079 systemd[1]: Detected architecture x86-64. Jan 29 11:56:09.709092 systemd[1]: Detected first boot. 
Jan 29 11:56:09.709110 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:56:09.709123 zram_generator::config[1054]: No configuration found. Jan 29 11:56:09.709142 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:56:09.709500 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:56:09.709515 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:56:09.709527 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:56:09.709540 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:56:09.709553 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:56:09.709565 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:56:09.709578 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:56:09.709591 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:56:09.709613 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:56:09.709627 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:56:09.709639 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:56:09.709652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:56:09.709665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:56:09.709678 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:56:09.709690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:56:09.709703 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:56:09.709715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:56:09.709731 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:56:09.709743 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:56:09.709756 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:56:09.709768 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:56:09.709781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:56:09.709793 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:56:09.709813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:56:09.709827 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:56:09.709843 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:56:09.709856 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:56:09.709868 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:56:09.709881 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:56:09.709893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:56:09.709907 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:56:09.709919 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 11:56:09.709931 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:56:09.709944 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:56:09.709959 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:56:09.709972 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:56:09.709984 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:09.709997 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:56:09.710009 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:56:09.710021 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:56:09.710034 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:56:09.710046 systemd[1]: Reached target machines.target - Containers. Jan 29 11:56:09.710062 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:56:09.710075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:09.710087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:56:09.710099 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:56:09.710112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:09.710124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:56:09.710137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:09.710149 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:56:09.710174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:09.710192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:56:09.710204 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:56:09.710216 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:56:09.710228 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:56:09.710241 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:56:09.710252 kernel: fuse: init (API version 7.39) Jan 29 11:56:09.710264 kernel: loop: module loaded Jan 29 11:56:09.710276 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:56:09.710288 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:56:09.710304 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:56:09.710317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:56:09.710329 kernel: ACPI: bus type drm_connector registered Jan 29 11:56:09.710342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:56:09.710354 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:56:09.710366 systemd[1]: Stopped verity-setup.service. Jan 29 11:56:09.710398 systemd-journald[1124]: Collecting audit messages is disabled. 
Jan 29 11:56:09.710426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:09.710439 systemd-journald[1124]: Journal started Jan 29 11:56:09.710461 systemd-journald[1124]: Runtime Journal (/run/log/journal/bf66b2ebb3c44cd1b0dc9387d178880c) is 6.0M, max 48.3M, 42.2M free. Jan 29 11:56:09.474791 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:56:09.489462 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:56:09.489982 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:56:09.712882 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:56:09.713921 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:56:09.715216 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:56:09.716471 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:56:09.717690 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:56:09.718957 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:56:09.720307 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:56:09.721658 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:56:09.723251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:56:09.724948 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:56:09.725217 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:56:09.726889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:09.727100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:09.728657 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:56:09.728859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:56:09.730332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:09.730513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:09.732322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:56:09.732502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:56:09.733966 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:09.734182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:09.735646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:56:09.737103 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:56:09.738943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:56:09.754866 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:56:09.775324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:56:09.777919 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:56:09.779187 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:56:09.779221 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 29 11:56:09.781527 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:56:09.784046 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:56:09.787139 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:56:09.788501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:09.791730 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:56:09.796023 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:56:09.799078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:56:09.802372 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:56:09.803568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:56:09.804997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:56:09.808254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:56:09.810871 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:56:09.815438 systemd-journald[1124]: Time spent on flushing to /var/log/journal/bf66b2ebb3c44cd1b0dc9387d178880c is 22.743ms for 994 entries. Jan 29 11:56:09.815438 systemd-journald[1124]: System Journal (/var/log/journal/bf66b2ebb3c44cd1b0dc9387d178880c) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:56:09.862227 systemd-journald[1124]: Received client request to flush runtime journal. Jan 29 11:56:09.862272 kernel: loop0: detected capacity change from 0 to 142488 Jan 29 11:56:09.814397 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:56:09.817023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:56:09.818643 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:56:09.820464 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:56:09.830252 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:56:09.854563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:56:09.856847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:56:09.858446 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:56:09.860694 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 29 11:56:09.869754 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:56:09.860712 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 29 11:56:09.869337 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:56:09.871503 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:56:09.873288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:56:09.888423 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
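The journald flush statistic above (22.743 ms for 994 entries written to the persistent journal) works out to roughly 23 microseconds per entry. The arithmetic, for reference:

    flush_ms, entries = 22.743, 994                # figures from the journald message above
    print(f"{flush_ms * 1000 / entries:.1f} us per entry")            # 22.9 us
    print(f"~{entries / (flush_ms / 1000):,.0f} entries per second")  # ~43,706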
Jan 29 11:56:09.890619 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:56:09.894270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:56:09.895189 kernel: loop1: detected capacity change from 0 to 140768 Jan 29 11:56:09.897053 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:56:09.924276 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:56:09.937462 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:56:09.940268 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 11:56:09.958697 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 29 11:56:09.959091 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 29 11:56:09.967261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:56:09.976176 kernel: loop3: detected capacity change from 0 to 142488 Jan 29 11:56:09.987194 kernel: loop4: detected capacity change from 0 to 140768 Jan 29 11:56:09.998182 kernel: loop5: detected capacity change from 0 to 210664 Jan 29 11:56:10.003309 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:56:10.004029 (sd-merge)[1195]: Merged extensions into '/usr'. Jan 29 11:56:10.010449 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:56:10.010468 systemd[1]: Reloading... Jan 29 11:56:10.063351 zram_generator::config[1220]: No configuration found. Jan 29 11:56:10.176780 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:56:10.235870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:10.288102 systemd[1]: Reloading finished in 277 ms. Jan 29 11:56:10.327752 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:56:10.329632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:56:10.344512 systemd[1]: Starting ensure-sysext.service... Jan 29 11:56:10.347019 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:56:10.356317 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:56:10.356471 systemd[1]: Reloading... Jan 29 11:56:10.374923 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:56:10.375753 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:56:10.376810 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:56:10.377121 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 11:56:10.377379 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 11:56:10.381860 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 29 11:56:10.381964 systemd-tmpfiles[1260]: Skipping /boot Jan 29 11:56:10.399095 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:56:10.399333 systemd-tmpfiles[1260]: Skipping /boot Jan 29 11:56:10.414301 zram_generator::config[1287]: No configuration found. Jan 29 11:56:10.552121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:10.603705 systemd[1]: Reloading finished in 246 ms. Jan 29 11:56:10.623850 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:56:10.637776 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:56:10.645745 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:56:10.648411 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:56:10.651369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:56:10.655889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:56:10.660403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:56:10.665956 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:56:10.674584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:10.675112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:10.677758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:10.681664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:10.686529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:10.688407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:10.693198 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:56:10.694365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:10.695615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:10.695864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:10.697769 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:10.698022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:10.701844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:10.702051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:10.710996 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 29 11:56:10.711147 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:56:10.714141 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:56:10.720309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
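The (sd-merge) messages above report the containerd-flatcar, docker-flatcar and kubernetes extensions being overlaid onto /usr by systemd-sysext, followed by the daemon reloads that pick up the merged units. A small sketch of how one could enumerate candidate extension images; the directory list is recalled from the systemd-sysext(8) manual rather than taken from this log, so treat it as an assumption to verify:

    from pathlib import Path

    # Locations systemd-sysext typically scans for *.raw extension images
    # (quoted from memory, not from this log).
    SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions",
              "/usr/lib/extensions", "/usr/local/lib/extensions"]

    images = sorted(p for d in SEARCH for p in Path(d).glob("*.raw"))
    print("Using extensions:", ", ".join(p.stem for p in images) or "(none)")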
Jan 29 11:56:10.720683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:10.722644 augenrules[1355]: No rules Jan 29 11:56:10.729540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:10.732262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:10.735853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:10.737240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:10.739485 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:56:10.742678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:10.744478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:56:10.746469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:10.746657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:10.748672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:10.748880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:10.750639 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:56:10.752497 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:10.752670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:10.766573 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:56:10.769682 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:56:10.777493 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:56:10.784681 systemd[1]: Finished ensure-sysext.service. Jan 29 11:56:10.795382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:10.795762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:10.807588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:10.812437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:56:10.829352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:10.834387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:10.836450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:10.843175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1378) Jan 29 11:56:10.846415 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:56:10.853618 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:56:10.855418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 11:56:10.855460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:10.856391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:10.856633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:10.860663 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:56:10.860942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:56:10.862899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:10.863230 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:10.865599 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:10.865860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:10.875956 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:56:10.881819 systemd-resolved[1330]: Positive Trust Anchors: Jan 29 11:56:10.882067 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:56:10.882110 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:56:10.889069 systemd-resolved[1330]: Defaulting to hostname 'linux'. Jan 29 11:56:10.889730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:56:10.889829 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:56:10.891460 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:56:10.896530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:56:10.903756 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:56:10.914140 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:56:10.924537 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:56:10.939666 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:56:10.948213 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:56:10.958494 systemd-networkd[1402]: lo: Link UP Jan 29 11:56:10.958507 systemd-networkd[1402]: lo: Gained carrier Jan 29 11:56:10.961051 systemd-networkd[1402]: Enumeration completed Jan 29 11:56:10.961212 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:56:10.961853 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:56:10.961863 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 11:56:10.962975 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:56:10.964664 systemd-networkd[1402]: eth0: Link UP Jan 29 11:56:10.964669 systemd-networkd[1402]: eth0: Gained carrier Jan 29 11:56:10.964693 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:56:10.964859 systemd[1]: Reached target network.target - Network. Jan 29 11:56:10.966255 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:56:10.976303 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:56:10.974376 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:56:10.976305 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:56:10.977247 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Jan 29 11:56:10.981318 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:56:10.981387 systemd-timesyncd[1404]: Initial clock synchronization to Wed 2025-01-29 11:56:11.294370 UTC. Jan 29 11:56:11.008024 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 29 11:56:11.008477 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:56:11.008717 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:56:11.008965 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:56:11.031664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:56:11.090381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:56:11.091126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:56:11.097340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:56:11.104269 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:56:11.117839 kernel: kvm_amd: TSC scaling supported Jan 29 11:56:11.117907 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:56:11.117946 kernel: kvm_amd: Nested Paging enabled Jan 29 11:56:11.119754 kernel: kvm_amd: LBR virtualization supported Jan 29 11:56:11.119793 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:56:11.119807 kernel: kvm_amd: Virtual GIF supported Jan 29 11:56:11.142262 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:56:11.176252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:56:11.178055 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:56:11.190352 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:56:11.201769 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:56:11.233445 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:56:11.235087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:56:11.236255 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:56:11.237465 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:56:11.238814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
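The timesyncd entry above is stamped 11:56:10.981387 in the journal while reporting an initial synchronization to 11:56:11.294370 UTC, i.e. the clock was stepped forward by about 0.31 s. Assuming both timestamps share the same UTC reference, which the log suggests, the difference is:

    from datetime import datetime

    logged = datetime(2025, 1, 29, 11, 56, 10, 981387)   # journal timestamp of the entry
    synced = datetime(2025, 1, 29, 11, 56, 11, 294370)   # target time reported by timesyncd
    print((synced - logged).total_seconds())             # 0.312983 s step forward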
Jan 29 11:56:11.240353 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:56:11.241837 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:56:11.243249 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:56:11.244681 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:56:11.244716 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:56:11.245716 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:56:11.247430 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:56:11.250304 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:56:11.264515 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:56:11.267091 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:56:11.268923 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:56:11.270169 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:56:11.271247 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:56:11.272308 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:56:11.272346 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:56:11.273742 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:56:11.276097 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:56:11.280219 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:56:11.280898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:56:11.285544 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:56:11.288663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:56:11.290132 jq[1443]: false Jan 29 11:56:11.290837 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:56:11.294976 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:56:11.298135 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:56:11.301827 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:56:11.307850 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:56:11.309662 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:56:11.310374 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:56:11.312400 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:56:11.317327 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:56:11.320083 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 29 11:56:11.321830 dbus-daemon[1442]: [system] SELinux support is enabled Jan 29 11:56:11.322517 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:56:11.331749 extend-filesystems[1444]: Found loop3 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found loop4 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found loop5 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found sr0 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda1 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda2 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda3 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found usr Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda4 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda6 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda7 Jan 29 11:56:11.335609 extend-filesystems[1444]: Found vda9 Jan 29 11:56:11.335609 extend-filesystems[1444]: Checking size of /dev/vda9 Jan 29 11:56:11.335538 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:56:11.352700 update_engine[1454]: I20250129 11:56:11.348955 1454 main.cc:92] Flatcar Update Engine starting Jan 29 11:56:11.352915 jq[1459]: true Jan 29 11:56:11.335824 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:56:11.336308 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:56:11.336609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:56:11.348076 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:56:11.348400 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:56:11.356206 update_engine[1454]: I20250129 11:56:11.353699 1454 update_check_scheduler.cc:74] Next update check in 5m43s Jan 29 11:56:11.364101 jq[1465]: true Jan 29 11:56:11.373487 extend-filesystems[1444]: Resized partition /dev/vda9 Jan 29 11:56:11.377934 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:56:11.400276 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Jan 29 11:56:11.400345 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:56:11.401603 tar[1463]: linux-amd64/helm Jan 29 11:56:11.385387 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:56:11.385760 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:56:11.387384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:56:11.387408 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:56:11.401188 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:56:11.401366 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:56:11.401387 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:56:11.402321 systemd-logind[1452]: New seat seat0. 
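extend-filesystems above walks the block devices, settles on /dev/vda9 (the root partition) and hands it to resize2fs; the kernel lines just below report the ext4 filesystem growing from 553472 to 1864699 blocks of 4 KiB. A quick check of what that growth amounts to (arithmetic only, not part of the log):

BLOCK = 4096  # ext4 block size, "(4k)" per the resize message below
old_blocks, new_blocks = 553_472, 1_864_699

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK
print(old_bytes / 2**30)                # ≈ 2.11 GiB before the resize
print(new_bytes / 2**30)                # ≈ 7.11 GiB after the resize
print((new_bytes - old_bytes) / 2**30)  # ≈ 5.00 GiB added to the root filesystem
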
Jan 29 11:56:11.425458 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:56:11.440796 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:56:11.473245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:56:11.571583 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:56:11.656224 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:56:11.692335 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:56:11.692665 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:56:11.692665 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:56:11.692665 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:56:11.699831 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Jan 29 11:56:11.694948 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:56:11.701687 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:56:11.696239 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:56:11.703881 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:56:11.706470 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:56:11.726540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:56:11.734575 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:56:11.747286 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:56:11.747665 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:56:11.757563 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:56:11.769250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:56:11.780728 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:56:11.783721 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:56:11.785294 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:56:11.807173 containerd[1467]: time="2025-01-29T11:56:11.807036877Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:56:11.837641 containerd[1467]: time="2025-01-29T11:56:11.837577604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.839892 containerd[1467]: time="2025-01-29T11:56:11.839849196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:11.839892 containerd[1467]: time="2025-01-29T11:56:11.839882874Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:56:11.839997 containerd[1467]: time="2025-01-29T11:56:11.839900386Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:56:11.840160 containerd[1467]: time="2025-01-29T11:56:11.840136341Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 29 11:56:11.840211 containerd[1467]: time="2025-01-29T11:56:11.840169843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840298 containerd[1467]: time="2025-01-29T11:56:11.840276388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840339 containerd[1467]: time="2025-01-29T11:56:11.840297439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840580 containerd[1467]: time="2025-01-29T11:56:11.840552862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840580 containerd[1467]: time="2025-01-29T11:56:11.840576631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840648 containerd[1467]: time="2025-01-29T11:56:11.840593163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840648 containerd[1467]: time="2025-01-29T11:56:11.840605803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.840738 containerd[1467]: time="2025-01-29T11:56:11.840719480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.841052 containerd[1467]: time="2025-01-29T11:56:11.841027406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:11.841221 containerd[1467]: time="2025-01-29T11:56:11.841181737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:11.841259 containerd[1467]: time="2025-01-29T11:56:11.841219581Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:56:11.841364 containerd[1467]: time="2025-01-29T11:56:11.841344835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:56:11.841436 containerd[1467]: time="2025-01-29T11:56:11.841416474Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:56:11.846924 containerd[1467]: time="2025-01-29T11:56:11.846883891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:56:11.847010 containerd[1467]: time="2025-01-29T11:56:11.846941401Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:56:11.847010 containerd[1467]: time="2025-01-29T11:56:11.846960318Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:56:11.847010 containerd[1467]: time="2025-01-29T11:56:11.846984586Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 11:56:11.847010 containerd[1467]: time="2025-01-29T11:56:11.847003939Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:56:11.847217 containerd[1467]: time="2025-01-29T11:56:11.847157959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:56:11.847453 containerd[1467]: time="2025-01-29T11:56:11.847425865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:56:11.847611 containerd[1467]: time="2025-01-29T11:56:11.847571504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:56:11.847611 containerd[1467]: time="2025-01-29T11:56:11.847596001Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:56:11.847713 containerd[1467]: time="2025-01-29T11:56:11.847611888Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:56:11.847713 containerd[1467]: time="2025-01-29T11:56:11.847628702Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847713 containerd[1467]: time="2025-01-29T11:56:11.847643995Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847713 containerd[1467]: time="2025-01-29T11:56:11.847665057Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847713 containerd[1467]: time="2025-01-29T11:56:11.847681402Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847713 containerd[1467]: time="2025-01-29T11:56:11.847708689Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847725847Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847741890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847756819Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847780764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847798036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847814163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847829405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847845917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847861367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.847880 containerd[1467]: time="2025-01-29T11:56:11.847875150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.847896784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.847914431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.847933119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.847949308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.847971483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.847988057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.848008276Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.848033262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.848048035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.848060424Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.848109252Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:56:11.848131 containerd[1467]: time="2025-01-29T11:56:11.848127710Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:56:11.848451 containerd[1467]: time="2025-01-29T11:56:11.848141515Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:56:11.848451 containerd[1467]: time="2025-01-29T11:56:11.848156725Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:56:11.848451 containerd[1467]: time="2025-01-29T11:56:11.848169458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848451 containerd[1467]: time="2025-01-29T11:56:11.848257535Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:56:11.848451 containerd[1467]: time="2025-01-29T11:56:11.848274161Z" level=info msg="NRI interface is disabled by configuration." 
Jan 29 11:56:11.848451 containerd[1467]: time="2025-01-29T11:56:11.848287425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:56:11.848788 containerd[1467]: time="2025-01-29T11:56:11.848705592Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:56:11.849000 containerd[1467]: time="2025-01-29T11:56:11.848790921Z" level=info msg="Connect containerd service" Jan 29 11:56:11.849000 containerd[1467]: time="2025-01-29T11:56:11.848826911Z" level=info msg="using legacy CRI server" Jan 29 11:56:11.849000 containerd[1467]: time="2025-01-29T11:56:11.848836645Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:56:11.850673 containerd[1467]: time="2025-01-29T11:56:11.850628749Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:56:11.851647 containerd[1467]: time="2025-01-29T11:56:11.851557303Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:56:11.851975 containerd[1467]: time="2025-01-29T11:56:11.851818723Z" level=info msg="Start subscribing containerd event" Jan 29 11:56:11.851975 containerd[1467]: time="2025-01-29T11:56:11.851889539Z" level=info msg="Start recovering state" Jan 29 11:56:11.852047 containerd[1467]: time="2025-01-29T11:56:11.852003434Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:56:11.852098 containerd[1467]: time="2025-01-29T11:56:11.852074281Z" level=info msg="Start event monitor" Jan 29 11:56:11.852148 containerd[1467]: time="2025-01-29T11:56:11.852107690Z" level=info msg="Start snapshots syncer" Jan 29 11:56:11.852148 containerd[1467]: time="2025-01-29T11:56:11.852118142Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:56:11.852148 containerd[1467]: time="2025-01-29T11:56:11.852126992Z" level=info msg="Start streaming server" Jan 29 11:56:11.852389 containerd[1467]: time="2025-01-29T11:56:11.852356877Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:56:11.852547 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:56:11.853270 containerd[1467]: time="2025-01-29T11:56:11.853247494Z" level=info msg="containerd successfully booted in 0.047502s" Jan 29 11:56:11.873000 tar[1463]: linux-amd64/LICENSE Jan 29 11:56:11.873104 tar[1463]: linux-amd64/README.md Jan 29 11:56:11.891787 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:56:12.118440 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 29 11:56:12.121796 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:56:12.123700 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:56:12.134510 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:56:12.137690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:12.140530 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:56:12.162628 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:56:12.162906 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:56:12.165231 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:56:12.168012 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:56:12.800788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:12.802715 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:56:12.805336 systemd[1]: Startup finished in 1.274s (kernel) + 5.208s (initrd) + 3.914s (userspace) = 10.398s. 
Jan 29 11:56:12.806592 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:56:13.276734 kubelet[1555]: E0129 11:56:13.276604 1555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:56:13.281210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:56:13.281449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:56:17.077568 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:56:17.079146 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:35652.service - OpenSSH per-connection server daemon (10.0.0.1:35652). Jan 29 11:56:17.276379 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 35652 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:17.279228 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:17.290957 systemd-logind[1452]: New session 1 of user core. Jan 29 11:56:17.292854 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:56:17.305687 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:56:17.320963 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:56:17.324876 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:56:17.335617 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:56:17.476998 systemd[1573]: Queued start job for default target default.target. Jan 29 11:56:17.489594 systemd[1573]: Created slice app.slice - User Application Slice. Jan 29 11:56:17.489623 systemd[1573]: Reached target paths.target - Paths. Jan 29 11:56:17.489638 systemd[1573]: Reached target timers.target - Timers. Jan 29 11:56:17.491479 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:56:17.504472 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:56:17.504644 systemd[1573]: Reached target sockets.target - Sockets. Jan 29 11:56:17.504666 systemd[1573]: Reached target basic.target - Basic System. Jan 29 11:56:17.504710 systemd[1573]: Reached target default.target - Main User Target. Jan 29 11:56:17.504760 systemd[1573]: Startup finished in 160ms. Jan 29 11:56:17.505328 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:56:17.507660 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:56:17.583491 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:35666.service - OpenSSH per-connection server daemon (10.0.0.1:35666). Jan 29 11:56:17.614822 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 35666 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:17.616530 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:17.621552 systemd-logind[1452]: New session 2 of user core. Jan 29 11:56:17.631396 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 11:56:17.689437 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:17.697575 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:35666.service: Deactivated successfully. Jan 29 11:56:17.699922 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:56:17.702050 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:56:17.712446 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:35668.service - OpenSSH per-connection server daemon (10.0.0.1:35668). Jan 29 11:56:17.713534 systemd-logind[1452]: Removed session 2. Jan 29 11:56:17.742377 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 35668 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:17.744205 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:17.748838 systemd-logind[1452]: New session 3 of user core. Jan 29 11:56:17.758312 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:56:17.811260 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:17.822851 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:35668.service: Deactivated successfully. Jan 29 11:56:17.824660 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:56:17.826194 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:56:17.840448 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:35676.service - OpenSSH per-connection server daemon (10.0.0.1:35676). Jan 29 11:56:17.841373 systemd-logind[1452]: Removed session 3. Jan 29 11:56:17.872829 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 35676 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:17.874856 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:17.879103 systemd-logind[1452]: New session 4 of user core. Jan 29 11:56:17.888294 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:56:17.944734 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:17.954870 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:35676.service: Deactivated successfully. Jan 29 11:56:17.956755 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:56:17.958567 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:56:17.959874 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:35692.service - OpenSSH per-connection server daemon (10.0.0.1:35692). Jan 29 11:56:17.960757 systemd-logind[1452]: Removed session 4. Jan 29 11:56:17.991979 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 35692 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:17.993516 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:17.997799 systemd-logind[1452]: New session 5 of user core. Jan 29 11:56:18.007314 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:56:18.066713 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:56:18.067098 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:56:18.085625 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 29 11:56:18.087852 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:18.103148 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:35692.service: Deactivated successfully. 
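Sessions 2 through 4 above each open and close within well under a tenth of a second, the pattern of a client issuing a single command over SSH rather than logging in interactively. A small pairing sketch (a hypothetical helper, fed two lines copied from the journal with the year added so they parse) turns the pam_unix open/close events into per-session durations:

import re
from datetime import datetime

events = [
    "2025-01-29 11:56:17.616530 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
    "2025-01-29 11:56:17.689437 sshd[1584]: pam_unix(sshd:session): session closed for user core",
]

pattern = re.compile(r"^(\S+ \S+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)")
opened = {}
for line in events:
    stamp, pid, what = pattern.match(line).groups()
    when = datetime.fromisoformat(stamp)
    if what == "opened":
        opened[pid] = when
    else:
        print(f"sshd[{pid}] session lasted {(when - opened.pop(pid)).total_seconds():.3f} s")  # 0.073 s
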
Jan 29 11:56:18.105147 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:56:18.107138 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:56:18.114431 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:35696.service - OpenSSH per-connection server daemon (10.0.0.1:35696). Jan 29 11:56:18.115360 systemd-logind[1452]: Removed session 5. Jan 29 11:56:18.142131 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 35696 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:18.143593 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:18.147322 systemd-logind[1452]: New session 6 of user core. Jan 29 11:56:18.161290 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:56:18.216891 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:56:18.217298 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:56:18.221281 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 29 11:56:18.229691 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:56:18.230101 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:56:18.253410 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:56:18.255295 auditctl[1620]: No rules Jan 29 11:56:18.255913 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:56:18.256238 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:56:18.259734 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:56:18.293539 augenrules[1638]: No rules Jan 29 11:56:18.295718 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:56:18.297078 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 29 11:56:18.299284 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:18.311130 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:35696.service: Deactivated successfully. Jan 29 11:56:18.313254 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:56:18.315009 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:56:18.329524 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:35702.service - OpenSSH per-connection server daemon (10.0.0.1:35702). Jan 29 11:56:18.330585 systemd-logind[1452]: Removed session 6. Jan 29 11:56:18.354951 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 35702 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:18.356435 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:18.360688 systemd-logind[1452]: New session 7 of user core. Jan 29 11:56:18.375313 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:56:18.429555 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:56:18.429942 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:56:18.722418 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 29 11:56:18.722581 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:56:18.993451 dockerd[1667]: time="2025-01-29T11:56:18.993258931Z" level=info msg="Starting up" Jan 29 11:56:19.368310 dockerd[1667]: time="2025-01-29T11:56:19.368266366Z" level=info msg="Loading containers: start." Jan 29 11:56:19.490195 kernel: Initializing XFRM netlink socket Jan 29 11:56:19.578153 systemd-networkd[1402]: docker0: Link UP Jan 29 11:56:19.607072 dockerd[1667]: time="2025-01-29T11:56:19.607028351Z" level=info msg="Loading containers: done." Jan 29 11:56:19.622158 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1813753654-merged.mount: Deactivated successfully. Jan 29 11:56:19.624527 dockerd[1667]: time="2025-01-29T11:56:19.624473328Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:56:19.624643 dockerd[1667]: time="2025-01-29T11:56:19.624584358Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:56:19.624748 dockerd[1667]: time="2025-01-29T11:56:19.624721684Z" level=info msg="Daemon has completed initialization" Jan 29 11:56:19.664085 dockerd[1667]: time="2025-01-29T11:56:19.664013769Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:56:19.664295 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:56:20.498816 containerd[1467]: time="2025-01-29T11:56:20.498759923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:56:21.106298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547340500.mount: Deactivated successfully. 
Jan 29 11:56:22.124184 containerd[1467]: time="2025-01-29T11:56:22.124092140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:22.124913 containerd[1467]: time="2025-01-29T11:56:22.124864877Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 11:56:22.126090 containerd[1467]: time="2025-01-29T11:56:22.126047001Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:22.128914 containerd[1467]: time="2025-01-29T11:56:22.128879449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:22.129868 containerd[1467]: time="2025-01-29T11:56:22.129833047Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.631025947s" Jan 29 11:56:22.129910 containerd[1467]: time="2025-01-29T11:56:22.129873190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:56:22.151786 containerd[1467]: time="2025-01-29T11:56:22.151734768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:56:23.495609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:56:23.502430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:23.770654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:23.776201 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:56:24.132374 kubelet[1895]: E0129 11:56:24.132188 1895 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:56:24.141292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:56:24.141529 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
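The kube-apiserver pull above reports an image of 32,673,812 bytes fetched in 1.631 s, an effective rate of roughly 20 MB/s (a single blended figure; registry round-trips and layer unpacking are not separated out in the log):

size_bytes = 32_673_812    # size "32673812" in the Pulled message above
duration_s = 1.631025947   # "in 1.631025947s"
print(size_bytes / duration_s / 1e6)    # ≈ 20.0 MB/s
print(size_bytes / duration_s / 2**20)  # ≈ 19.1 MiB/s
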
Jan 29 11:56:24.471232 containerd[1467]: time="2025-01-29T11:56:24.471038351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:24.472369 containerd[1467]: time="2025-01-29T11:56:24.472306643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 11:56:24.473584 containerd[1467]: time="2025-01-29T11:56:24.473539798Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:24.476681 containerd[1467]: time="2025-01-29T11:56:24.476623126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:24.477775 containerd[1467]: time="2025-01-29T11:56:24.477714920Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.325929895s" Jan 29 11:56:24.477775 containerd[1467]: time="2025-01-29T11:56:24.477760729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:56:24.505610 containerd[1467]: time="2025-01-29T11:56:24.505559050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:56:25.859614 containerd[1467]: time="2025-01-29T11:56:25.859518559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:25.860645 containerd[1467]: time="2025-01-29T11:56:25.860563235Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 11:56:25.861826 containerd[1467]: time="2025-01-29T11:56:25.861786484Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:25.865099 containerd[1467]: time="2025-01-29T11:56:25.865060804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:25.866317 containerd[1467]: time="2025-01-29T11:56:25.866285948Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.360685515s" Jan 29 11:56:25.866317 containerd[1467]: time="2025-01-29T11:56:25.866313596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:56:25.891836 
containerd[1467]: time="2025-01-29T11:56:25.891767982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:56:26.988183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2071787162.mount: Deactivated successfully. Jan 29 11:56:27.251068 containerd[1467]: time="2025-01-29T11:56:27.250890193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:27.254168 containerd[1467]: time="2025-01-29T11:56:27.254096267Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:56:27.255672 containerd[1467]: time="2025-01-29T11:56:27.255605315Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:27.258010 containerd[1467]: time="2025-01-29T11:56:27.257973732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:27.258959 containerd[1467]: time="2025-01-29T11:56:27.258912664Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.367099593s" Jan 29 11:56:27.259011 containerd[1467]: time="2025-01-29T11:56:27.258960563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:56:27.284212 containerd[1467]: time="2025-01-29T11:56:27.284143438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:56:27.866960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650040246.mount: Deactivated successfully. 
Jan 29 11:56:29.350220 containerd[1467]: time="2025-01-29T11:56:29.350130594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.351254 containerd[1467]: time="2025-01-29T11:56:29.351134943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:56:29.352538 containerd[1467]: time="2025-01-29T11:56:29.352494929Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.355853 containerd[1467]: time="2025-01-29T11:56:29.355815029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.357325 containerd[1467]: time="2025-01-29T11:56:29.357267927Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.073051341s" Jan 29 11:56:29.357388 containerd[1467]: time="2025-01-29T11:56:29.357324381Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:56:29.378506 containerd[1467]: time="2025-01-29T11:56:29.378463777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:56:29.909881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449801433.mount: Deactivated successfully. 
Jan 29 11:56:29.914849 containerd[1467]: time="2025-01-29T11:56:29.914805137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.915730 containerd[1467]: time="2025-01-29T11:56:29.915659586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:56:29.917114 containerd[1467]: time="2025-01-29T11:56:29.917070377Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.919475 containerd[1467]: time="2025-01-29T11:56:29.919426416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:29.920062 containerd[1467]: time="2025-01-29T11:56:29.920022452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 541.518368ms" Jan 29 11:56:29.920062 containerd[1467]: time="2025-01-29T11:56:29.920052925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:56:29.945000 containerd[1467]: time="2025-01-29T11:56:29.944926555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:56:30.494973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144869373.mount: Deactivated successfully. Jan 29 11:56:32.500265 containerd[1467]: time="2025-01-29T11:56:32.500191335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:32.501060 containerd[1467]: time="2025-01-29T11:56:32.500987146Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:56:32.502207 containerd[1467]: time="2025-01-29T11:56:32.502170628Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:32.505762 containerd[1467]: time="2025-01-29T11:56:32.505722723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:56:32.506931 containerd[1467]: time="2025-01-29T11:56:32.506889084Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.561912779s" Jan 29 11:56:32.506931 containerd[1467]: time="2025-01-29T11:56:32.506926432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:56:34.245454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
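kubelet.service has now failed twice (at 11:56:13 and 11:56:24, both times because /var/lib/kubelet/config.yaml does not exist yet) and systemd has scheduled restart number 2. The spacing between each failure and the next scheduled restart is consistent with an on-failure restart delay of about ten seconds (an inference from the timestamps; the unit file itself is not shown in this log):

from datetime import datetime

failed_1  = datetime.fromisoformat("2025-01-29 11:56:13.281449")  # first 'Failed with result'
restart_1 = datetime.fromisoformat("2025-01-29 11:56:23.495609")  # restart counter 1
failed_2  = datetime.fromisoformat("2025-01-29 11:56:24.141529")  # second 'Failed with result'
restart_2 = datetime.fromisoformat("2025-01-29 11:56:34.245454")  # restart counter 2

print((restart_1 - failed_1).total_seconds())  # ≈ 10.2 s
print((restart_2 - failed_2).total_seconds())  # ≈ 10.1 s
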
Jan 29 11:56:34.257459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:34.426758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:34.432741 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:56:34.628565 kubelet[2125]: E0129 11:56:34.628378 2125 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:56:34.634061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:56:34.634363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:56:35.197138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:35.207394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:35.226641 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit session-7.scope)... Jan 29 11:56:35.226657 systemd[1]: Reloading... Jan 29 11:56:35.325291 zram_generator::config[2179]: No configuration found. Jan 29 11:56:35.591179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:35.680056 systemd[1]: Reloading finished in 452 ms. Jan 29 11:56:35.733583 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:56:35.733686 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:56:35.733969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:35.736960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:35.893410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:35.898415 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:56:35.943736 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:56:35.943736 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:56:35.943736 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
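The deprecation warnings above say those flags belong in the file named by --config, and the earlier kubelet failures in this log were caused by exactly that file, /var/lib/kubelet/config.yaml, not existing yet. Purely for illustration, here is a hypothetical minimal example of such a file written out from Python: the apiVersion/kind and field names follow the public KubeletConfiguration schema, cgroupDriver and staticPodPath echo values that appear in the configuration dump a few lines below, and everything else is an assumption rather than what this node actually used:

from pathlib import Path

# Hypothetical KubeletConfiguration, illustrative only.
config_yaml = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
"""

# Written to a demo path, deliberately not /var/lib/kubelet/config.yaml.
Path("/tmp/kubelet-config-example.yaml").write_text(config_yaml)
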
Jan 29 11:56:35.944186 kubelet[2228]: I0129 11:56:35.943805 2228 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:56:36.195291 kubelet[2228]: I0129 11:56:36.195135 2228 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:56:36.195291 kubelet[2228]: I0129 11:56:36.195174 2228 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:56:36.195451 kubelet[2228]: I0129 11:56:36.195358 2228 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:56:36.209915 kubelet[2228]: I0129 11:56:36.209650 2228 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:56:36.210104 kubelet[2228]: E0129 11:56:36.210070 2228 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.224685 kubelet[2228]: I0129 11:56:36.224647 2228 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:56:36.226505 kubelet[2228]: I0129 11:56:36.226452 2228 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:56:36.226688 kubelet[2228]: I0129 11:56:36.226497 2228 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:56:36.227122 kubelet[2228]: I0129 11:56:36.227092 2228 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:56:36.227122 kubelet[2228]: I0129 11:56:36.227112 2228 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:56:36.227313 kubelet[2228]: I0129 11:56:36.227285 2228 state_mem.go:36] "Initialized new in-memory state store" Jan 29 
11:56:36.227949 kubelet[2228]: I0129 11:56:36.227921 2228 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:56:36.227949 kubelet[2228]: I0129 11:56:36.227943 2228 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:56:36.228004 kubelet[2228]: I0129 11:56:36.227968 2228 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:56:36.228004 kubelet[2228]: I0129 11:56:36.227987 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:56:36.231340 kubelet[2228]: I0129 11:56:36.231309 2228 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:56:36.232337 kubelet[2228]: W0129 11:56:36.232243 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.232337 kubelet[2228]: E0129 11:56:36.232296 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.232337 kubelet[2228]: W0129 11:56:36.232292 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.232447 kubelet[2228]: E0129 11:56:36.232350 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.232706 kubelet[2228]: I0129 11:56:36.232680 2228 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:56:36.232755 kubelet[2228]: W0129 11:56:36.232737 2228 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
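The container manager line above lists the hard eviction thresholds this kubelet runs with: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. A simplified sketch of how thresholds of that shape are evaluated (the node stats are made up, and the real eviction manager also handles grace periods, min-reclaim and pod ranking, which this ignores):

# Thresholds as logged above: an absolute quantity or a fraction of capacity.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024**2),  # 100Mi
    "nodefs.available":   ("percent", 0.10),
    "nodefs.inodesFree":  ("percent", 0.05),
    "imagefs.available":  ("percent", 0.15),
    "imagefs.inodesFree": ("percent", 0.05),
}

# Hypothetical node stats: (currently available, total capacity) per signal.
stats = {
    "memory.available":   (80 * 1024**2, 4 * 1024**3),
    "nodefs.available":   (6 * 1024**3, 8 * 1024**3),
    "nodefs.inodesFree":  (400_000, 500_000),
    "imagefs.available":  (1 * 1024**3, 8 * 1024**3),
    "imagefs.inodesFree": (50_000, 500_000),
}

for signal, (kind, limit) in THRESHOLDS.items():
    available, capacity = stats[signal]
    threshold = limit if kind == "quantity" else limit * capacity
    if available < threshold:
        print(f"{signal}: below threshold, node condition would report pressure")
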
Jan 29 11:56:36.233404 kubelet[2228]: I0129 11:56:36.233368 2228 server.go:1264] "Started kubelet" Jan 29 11:56:36.234788 kubelet[2228]: I0129 11:56:36.234355 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:56:36.234788 kubelet[2228]: I0129 11:56:36.234644 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:56:36.234788 kubelet[2228]: I0129 11:56:36.234726 2228 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:56:36.234788 kubelet[2228]: I0129 11:56:36.234761 2228 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:56:36.235720 kubelet[2228]: I0129 11:56:36.235693 2228 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:56:36.238003 kubelet[2228]: E0129 11:56:36.237888 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f27d97c322f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:56:36.233351009 +0000 UTC m=+0.330739984,LastTimestamp:2025-01-29 11:56:36.233351009 +0000 UTC m=+0.330739984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:56:36.238187 kubelet[2228]: I0129 11:56:36.238052 2228 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:56:36.238187 kubelet[2228]: I0129 11:56:36.238124 2228 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:56:36.238271 kubelet[2228]: I0129 11:56:36.238208 2228 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:56:36.238514 kubelet[2228]: W0129 11:56:36.238427 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.238514 kubelet[2228]: E0129 11:56:36.238465 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.239035 kubelet[2228]: E0129 11:56:36.238959 2228 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:56:36.239035 kubelet[2228]: E0129 11:56:36.238960 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Jan 29 11:56:36.239236 kubelet[2228]: I0129 11:56:36.239204 2228 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:56:36.243141 kubelet[2228]: I0129 11:56:36.242888 2228 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:56:36.243141 kubelet[2228]: I0129 11:56:36.242909 2228 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:56:36.253602 kubelet[2228]: I0129 11:56:36.253450 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:56:36.254843 kubelet[2228]: I0129 11:56:36.254798 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:56:36.254843 kubelet[2228]: I0129 11:56:36.254835 2228 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:56:36.254974 kubelet[2228]: I0129 11:56:36.254861 2228 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:56:36.254974 kubelet[2228]: E0129 11:56:36.254919 2228 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:56:36.259968 kubelet[2228]: W0129 11:56:36.259470 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.259968 kubelet[2228]: E0129 11:56:36.259528 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:36.260408 kubelet[2228]: I0129 11:56:36.260370 2228 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:56:36.260408 kubelet[2228]: I0129 11:56:36.260391 2228 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:56:36.260479 kubelet[2228]: I0129 11:56:36.260412 2228 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:56:36.339490 kubelet[2228]: I0129 11:56:36.339430 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:56:36.339866 kubelet[2228]: E0129 11:56:36.339838 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:56:36.355187 kubelet[2228]: E0129 11:56:36.355091 2228 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:56:36.440063 kubelet[2228]: E0129 11:56:36.439990 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial 
tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Jan 29 11:56:36.541595 kubelet[2228]: I0129 11:56:36.541533 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:56:36.542052 kubelet[2228]: E0129 11:56:36.542002 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:56:36.556284 kubelet[2228]: E0129 11:56:36.556206 2228 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:56:36.569044 kubelet[2228]: I0129 11:56:36.568960 2228 policy_none.go:49] "None policy: Start" Jan 29 11:56:36.570219 kubelet[2228]: I0129 11:56:36.570182 2228 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:56:36.570219 kubelet[2228]: I0129 11:56:36.570226 2228 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:56:36.581360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:56:36.594709 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:56:36.597816 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:56:36.608456 kubelet[2228]: I0129 11:56:36.608406 2228 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:56:36.608811 kubelet[2228]: I0129 11:56:36.608731 2228 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:56:36.608977 kubelet[2228]: I0129 11:56:36.608887 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:56:36.610189 kubelet[2228]: E0129 11:56:36.610141 2228 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:56:36.840887 kubelet[2228]: E0129 11:56:36.840702 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Jan 29 11:56:36.943660 kubelet[2228]: I0129 11:56:36.943594 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:56:36.944184 kubelet[2228]: E0129 11:56:36.944131 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:56:36.957365 kubelet[2228]: I0129 11:56:36.957274 2228 topology_manager.go:215] "Topology Admit Handler" podUID="cda33c7518c1e65b2ab9c24236d21c44" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:56:36.958534 kubelet[2228]: I0129 11:56:36.958496 2228 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:56:36.959170 kubelet[2228]: I0129 11:56:36.959128 2228 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:56:36.967349 systemd[1]: Created slice kubepods-burstable-podcda33c7518c1e65b2ab9c24236d21c44.slice - libcontainer container 
kubepods-burstable-podcda33c7518c1e65b2ab9c24236d21c44.slice. Jan 29 11:56:36.983341 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 29 11:56:36.987785 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 29 11:56:37.042401 kubelet[2228]: I0129 11:56:37.042335 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:37.042401 kubelet[2228]: I0129 11:56:37.042389 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:37.042401 kubelet[2228]: I0129 11:56:37.042413 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:56:37.042401 kubelet[2228]: I0129 11:56:37.042434 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:37.042701 kubelet[2228]: I0129 11:56:37.042451 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:37.042701 kubelet[2228]: I0129 11:56:37.042470 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:37.042701 kubelet[2228]: I0129 11:56:37.042486 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:37.042701 kubelet[2228]: I0129 11:56:37.042554 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:37.042701 kubelet[2228]: I0129 11:56:37.042614 2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:37.228150 kubelet[2228]: W0129 11:56:37.227934 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.228150 kubelet[2228]: E0129 11:56:37.228038 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.280112 kubelet[2228]: E0129 11:56:37.280081 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:37.280654 containerd[1467]: time="2025-01-29T11:56:37.280613626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda33c7518c1e65b2ab9c24236d21c44,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:37.286854 kubelet[2228]: E0129 11:56:37.286836 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:37.287167 containerd[1467]: time="2025-01-29T11:56:37.287118578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:37.291356 kubelet[2228]: E0129 11:56:37.291334 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:37.291649 containerd[1467]: time="2025-01-29T11:56:37.291621480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:37.375866 kubelet[2228]: W0129 11:56:37.375772 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.375866 kubelet[2228]: E0129 11:56:37.375849 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.602473 kubelet[2228]: W0129 11:56:37.602385 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.602473 kubelet[2228]: E0129 11:56:37.602467 2228 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.642456 kubelet[2228]: E0129 11:56:37.642383 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Jan 29 11:56:37.716827 kubelet[2228]: W0129 11:56:37.716703 2228 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.716827 kubelet[2228]: E0129 11:56:37.716775 2228 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:37.747303 kubelet[2228]: I0129 11:56:37.747249 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:56:37.747975 kubelet[2228]: E0129 11:56:37.747875 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:56:38.397043 kubelet[2228]: E0129 11:56:38.396943 2228 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:56:38.628341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630494850.mount: Deactivated successfully. 
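
Every failure above — the reflector list calls, the certificate signing request, the node-lease GET and the node registration POST — ends in the same "dial tcp 10.0.0.115:6443: connect: connection refused": the kube-apiserver static pod is not serving yet (its sandbox is only created further down). A minimal sketch of the same reachability check, assuming only the endpoint printed in the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint taken from the failing requests above.
        addr := "10.0.0.115:6443"
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            // Before the kube-apiserver container is up this prints
            // "dial tcp 10.0.0.115:6443: connect: connection refused".
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver TCP endpoint is accepting connections")
    }

The kubelet simply keeps retrying with a growing interval; the controller.go:145 lease retries above back off from 200ms to 400ms, 800ms and then 1.6s.
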
Jan 29 11:56:38.639920 containerd[1467]: time="2025-01-29T11:56:38.639821034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:56:38.641174 containerd[1467]: time="2025-01-29T11:56:38.641109847Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:56:38.642175 containerd[1467]: time="2025-01-29T11:56:38.642102572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:56:38.643168 containerd[1467]: time="2025-01-29T11:56:38.643115826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:56:38.644022 containerd[1467]: time="2025-01-29T11:56:38.643981076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:56:38.644979 containerd[1467]: time="2025-01-29T11:56:38.644943883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:56:38.646044 containerd[1467]: time="2025-01-29T11:56:38.645981038Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:56:38.650030 containerd[1467]: time="2025-01-29T11:56:38.649928867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:56:38.650800 containerd[1467]: time="2025-01-29T11:56:38.650758634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.36356661s" Jan 29 11:56:38.652105 containerd[1467]: time="2025-01-29T11:56:38.652075781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.371388448s" Jan 29 11:56:38.653133 containerd[1467]: time="2025-01-29T11:56:38.653109466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.361435093s" Jan 29 11:56:38.781225 containerd[1467]: time="2025-01-29T11:56:38.781089826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:38.781763 containerd[1467]: time="2025-01-29T11:56:38.781517822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:38.781763 containerd[1467]: time="2025-01-29T11:56:38.781586304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:38.782549 containerd[1467]: time="2025-01-29T11:56:38.781878451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:38.782611 containerd[1467]: time="2025-01-29T11:56:38.781398211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:38.782611 containerd[1467]: time="2025-01-29T11:56:38.781440746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:38.782611 containerd[1467]: time="2025-01-29T11:56:38.781459231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:38.782611 containerd[1467]: time="2025-01-29T11:56:38.781553758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:38.785944 containerd[1467]: time="2025-01-29T11:56:38.785862247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:38.786004 containerd[1467]: time="2025-01-29T11:56:38.785967526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:38.786082 containerd[1467]: time="2025-01-29T11:56:38.786006701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:38.786311 containerd[1467]: time="2025-01-29T11:56:38.786236265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:38.811421 systemd[1]: Started cri-containerd-535b4bb2471226aad710cdcbc57abc0777ab1cdbd3a838e518c88df5e4ee80f6.scope - libcontainer container 535b4bb2471226aad710cdcbc57abc0777ab1cdbd3a838e518c88df5e4ee80f6. Jan 29 11:56:38.813374 systemd[1]: Started cri-containerd-6864ab6eafe9edce857f069a7242d70830456899ac597fb19f84e90ca53399d9.scope - libcontainer container 6864ab6eafe9edce857f069a7242d70830456899ac597fb19f84e90ca53399d9. Jan 29 11:56:38.816628 systemd[1]: Started cri-containerd-8a34cb4a6f71e65601b36fbab4b06e5f9cd008e741c1420c00cb2c91560fbc40.scope - libcontainer container 8a34cb4a6f71e65601b36fbab4b06e5f9cd008e741c1420c00cb2c91560fbc40. 
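
The three pause sandboxes above are started as cri-containerd-*.scope units: containerd is the runtime in use, while the earlier crio factory registration failed because /var/run/crio/crio.sock does not exist. A small, illustrative sketch that checks which runtime socket is actually present (the containerd path below is the usual default and is an assumption; the crio path is the one shown in the failure above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Default socket locations; only containerd's exists on this node,
        // which is why the crio factory registration above failed.
        sockets := []string{
            "/run/containerd/containerd.sock",
            "/var/run/crio/crio.sock",
        }
        for _, s := range sockets {
            conn, err := net.DialTimeout("unix", s, time.Second)
            if err != nil {
                fmt.Printf("%s: not available (%v)\n", s, err)
                continue
            }
            conn.Close()
            fmt.Printf("%s: runtime socket is accepting connections\n", s)
        }
    }
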
Jan 29 11:56:38.861897 containerd[1467]: time="2025-01-29T11:56:38.861773886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"535b4bb2471226aad710cdcbc57abc0777ab1cdbd3a838e518c88df5e4ee80f6\"" Jan 29 11:56:38.865073 kubelet[2228]: E0129 11:56:38.863543 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:38.867919 containerd[1467]: time="2025-01-29T11:56:38.867879711Z" level=info msg="CreateContainer within sandbox \"535b4bb2471226aad710cdcbc57abc0777ab1cdbd3a838e518c88df5e4ee80f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:56:38.869006 containerd[1467]: time="2025-01-29T11:56:38.868961337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda33c7518c1e65b2ab9c24236d21c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"6864ab6eafe9edce857f069a7242d70830456899ac597fb19f84e90ca53399d9\"" Jan 29 11:56:38.869603 containerd[1467]: time="2025-01-29T11:56:38.869576211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a34cb4a6f71e65601b36fbab4b06e5f9cd008e741c1420c00cb2c91560fbc40\"" Jan 29 11:56:38.870082 kubelet[2228]: E0129 11:56:38.870058 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:38.870626 kubelet[2228]: E0129 11:56:38.870603 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:38.872433 containerd[1467]: time="2025-01-29T11:56:38.872380033Z" level=info msg="CreateContainer within sandbox \"6864ab6eafe9edce857f069a7242d70830456899ac597fb19f84e90ca53399d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:56:38.872683 containerd[1467]: time="2025-01-29T11:56:38.872651731Z" level=info msg="CreateContainer within sandbox \"8a34cb4a6f71e65601b36fbab4b06e5f9cd008e741c1420c00cb2c91560fbc40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:56:38.919947 containerd[1467]: time="2025-01-29T11:56:38.919831951Z" level=info msg="CreateContainer within sandbox \"6864ab6eafe9edce857f069a7242d70830456899ac597fb19f84e90ca53399d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fd25d8fd2ba21b65de8197299fa42b301f4a951a9b57f525e64bc08f4efe6d54\"" Jan 29 11:56:38.920524 containerd[1467]: time="2025-01-29T11:56:38.920447287Z" level=info msg="StartContainer for \"fd25d8fd2ba21b65de8197299fa42b301f4a951a9b57f525e64bc08f4efe6d54\"" Jan 29 11:56:38.921370 containerd[1467]: time="2025-01-29T11:56:38.921321494Z" level=info msg="CreateContainer within sandbox \"535b4bb2471226aad710cdcbc57abc0777ab1cdbd3a838e518c88df5e4ee80f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a5334325d16f0407b5cd5b097df0b40bf43279da495fb122e9fb5b1f8bd2a53\"" Jan 29 11:56:38.921618 containerd[1467]: time="2025-01-29T11:56:38.921586502Z" level=info msg="StartContainer for \"6a5334325d16f0407b5cd5b097df0b40bf43279da495fb122e9fb5b1f8bd2a53\"" Jan 29 
11:56:38.928419 containerd[1467]: time="2025-01-29T11:56:38.928362274Z" level=info msg="CreateContainer within sandbox \"8a34cb4a6f71e65601b36fbab4b06e5f9cd008e741c1420c00cb2c91560fbc40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de45fd44d846fe09e90834677feae30cc26556c5448b85cb865395efe8c41408\"" Jan 29 11:56:38.929509 containerd[1467]: time="2025-01-29T11:56:38.929369631Z" level=info msg="StartContainer for \"de45fd44d846fe09e90834677feae30cc26556c5448b85cb865395efe8c41408\"" Jan 29 11:56:38.952400 systemd[1]: Started cri-containerd-6a5334325d16f0407b5cd5b097df0b40bf43279da495fb122e9fb5b1f8bd2a53.scope - libcontainer container 6a5334325d16f0407b5cd5b097df0b40bf43279da495fb122e9fb5b1f8bd2a53. Jan 29 11:56:38.956781 systemd[1]: Started cri-containerd-fd25d8fd2ba21b65de8197299fa42b301f4a951a9b57f525e64bc08f4efe6d54.scope - libcontainer container fd25d8fd2ba21b65de8197299fa42b301f4a951a9b57f525e64bc08f4efe6d54. Jan 29 11:56:38.967474 systemd[1]: Started cri-containerd-de45fd44d846fe09e90834677feae30cc26556c5448b85cb865395efe8c41408.scope - libcontainer container de45fd44d846fe09e90834677feae30cc26556c5448b85cb865395efe8c41408. Jan 29 11:56:39.025192 containerd[1467]: time="2025-01-29T11:56:39.024997975Z" level=info msg="StartContainer for \"6a5334325d16f0407b5cd5b097df0b40bf43279da495fb122e9fb5b1f8bd2a53\" returns successfully" Jan 29 11:56:39.027060 containerd[1467]: time="2025-01-29T11:56:39.027023263Z" level=info msg="StartContainer for \"de45fd44d846fe09e90834677feae30cc26556c5448b85cb865395efe8c41408\" returns successfully" Jan 29 11:56:39.027118 containerd[1467]: time="2025-01-29T11:56:39.027086049Z" level=info msg="StartContainer for \"fd25d8fd2ba21b65de8197299fa42b301f4a951a9b57f525e64bc08f4efe6d54\" returns successfully" Jan 29 11:56:39.270990 kubelet[2228]: E0129 11:56:39.270947 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:39.274026 kubelet[2228]: E0129 11:56:39.273995 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:39.276515 kubelet[2228]: E0129 11:56:39.276485 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:39.349732 kubelet[2228]: I0129 11:56:39.349691 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:56:40.189785 kubelet[2228]: E0129 11:56:40.189731 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:56:40.230386 kubelet[2228]: I0129 11:56:40.230302 2228 apiserver.go:52] "Watching apiserver" Jan 29 11:56:40.238499 kubelet[2228]: I0129 11:56:40.238428 2228 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:56:40.247894 kubelet[2228]: E0129 11:56:40.247778 2228 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27d97c322f61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:56:36.233351009 +0000 UTC m=+0.330739984,LastTimestamp:2025-01-29 11:56:36.233351009 +0000 UTC m=+0.330739984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:56:40.279286 kubelet[2228]: E0129 11:56:40.279238 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:40.279286 kubelet[2228]: E0129 11:56:40.279276 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:40.301666 kubelet[2228]: E0129 11:56:40.301569 2228 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27d97c879caf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:56:36.238949551 +0000 UTC m=+0.336338526,LastTimestamp:2025-01-29 11:56:36.238949551 +0000 UTC m=+0.336338526,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:56:40.355051 kubelet[2228]: E0129 11:56:40.354923 2228 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f27d97dc552dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:56:36.2597711 +0000 UTC m=+0.357160065,LastTimestamp:2025-01-29 11:56:36.2597711 +0000 UTC m=+0.357160065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:56:40.373299 kubelet[2228]: I0129 11:56:40.373226 2228 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:56:42.376080 systemd[1]: Reloading requested from client PID 2508 ('systemctl') (unit session-7.scope)... Jan 29 11:56:42.376116 systemd[1]: Reloading... Jan 29 11:56:42.469206 zram_generator::config[2547]: No configuration found. Jan 29 11:56:42.610537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:42.718649 systemd[1]: Reloading finished in 341 ms. Jan 29 11:56:42.765121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:42.787146 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:56:42.787610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:42.798721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
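
The events rejected above fail with "namespaces \"default\" not found" — they were recorded before the freshly started apiserver had created the default namespace — and carry names like localhost.181f27d97c322f61. The suffix is the event's FirstTimestamp in nanoseconds rendered as hex, so an event name can be matched back to a moment in this log. A small sketch reproducing the name from the timestamp shown above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // FirstTimestamp copied from the rejected "Starting kubelet." event above.
        ts, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
            "2025-01-29 11:56:36.233351009 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints "localhost.181f27d97c322f61", matching the rejected event's name.
        fmt.Printf("localhost.%x\n", ts.UnixNano())
    }
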
Jan 29 11:56:42.958708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:42.963841 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:56:43.011648 kubelet[2592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:56:43.011648 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:56:43.011648 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:56:43.012116 kubelet[2592]: I0129 11:56:43.011692 2592 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:56:43.016823 kubelet[2592]: I0129 11:56:43.016780 2592 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:56:43.016823 kubelet[2592]: I0129 11:56:43.016805 2592 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:56:43.019879 kubelet[2592]: I0129 11:56:43.019829 2592 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:56:43.021250 kubelet[2592]: I0129 11:56:43.021222 2592 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:56:43.022523 kubelet[2592]: I0129 11:56:43.022487 2592 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:56:43.030933 kubelet[2592]: I0129 11:56:43.030910 2592 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:56:43.031264 kubelet[2592]: I0129 11:56:43.031233 2592 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:56:43.031470 kubelet[2592]: I0129 11:56:43.031262 2592 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:56:43.031569 kubelet[2592]: I0129 11:56:43.031492 2592 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:56:43.031569 kubelet[2592]: I0129 11:56:43.031505 2592 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:56:43.031614 kubelet[2592]: I0129 11:56:43.031569 2592 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:56:43.031699 kubelet[2592]: I0129 11:56:43.031686 2592 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:56:43.031719 kubelet[2592]: I0129 11:56:43.031705 2592 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:56:43.031748 kubelet[2592]: I0129 11:56:43.031733 2592 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:56:43.031772 kubelet[2592]: I0129 11:56:43.031759 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:56:43.032449 kubelet[2592]: I0129 11:56:43.032421 2592 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:56:43.032607 kubelet[2592]: I0129 11:56:43.032588 2592 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:56:43.033057 kubelet[2592]: I0129 11:56:43.033039 2592 server.go:1264] "Started kubelet" Jan 29 11:56:43.034452 kubelet[2592]: I0129 11:56:43.033553 2592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:56:43.034452 kubelet[2592]: I0129 11:56:43.033780 2592 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 
11:56:43.034452 kubelet[2592]: I0129 11:56:43.034051 2592 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:56:43.041203 kubelet[2592]: I0129 11:56:43.038283 2592 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:56:43.041203 kubelet[2592]: E0129 11:56:43.039055 2592 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:56:43.041203 kubelet[2592]: I0129 11:56:43.040445 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:56:43.046832 kubelet[2592]: I0129 11:56:43.044506 2592 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:56:43.046832 kubelet[2592]: I0129 11:56:43.044881 2592 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:56:43.046832 kubelet[2592]: I0129 11:56:43.045098 2592 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:56:43.048574 kubelet[2592]: I0129 11:56:43.048549 2592 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:56:43.048696 kubelet[2592]: I0129 11:56:43.048653 2592 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:56:43.050712 kubelet[2592]: I0129 11:56:43.050691 2592 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:56:43.059815 kubelet[2592]: I0129 11:56:43.059742 2592 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:56:43.061330 kubelet[2592]: I0129 11:56:43.061300 2592 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:56:43.061391 kubelet[2592]: I0129 11:56:43.061338 2592 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:56:43.061391 kubelet[2592]: I0129 11:56:43.061358 2592 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:56:43.061441 kubelet[2592]: E0129 11:56:43.061412 2592 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:56:43.094264 kubelet[2592]: I0129 11:56:43.094231 2592 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:56:43.094264 kubelet[2592]: I0129 11:56:43.094254 2592 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:56:43.094435 kubelet[2592]: I0129 11:56:43.094298 2592 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:56:43.094560 kubelet[2592]: I0129 11:56:43.094509 2592 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:56:43.094560 kubelet[2592]: I0129 11:56:43.094540 2592 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:56:43.094560 kubelet[2592]: I0129 11:56:43.094561 2592 policy_none.go:49] "None policy: Start" Jan 29 11:56:43.095228 kubelet[2592]: I0129 11:56:43.095200 2592 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:56:43.095228 kubelet[2592]: I0129 11:56:43.095223 2592 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:56:43.095383 kubelet[2592]: I0129 11:56:43.095362 2592 state_mem.go:75] "Updated machine memory state" Jan 29 11:56:43.100000 kubelet[2592]: I0129 11:56:43.099975 2592 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:56:43.100206 
kubelet[2592]: I0129 11:56:43.100150 2592 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:56:43.100276 kubelet[2592]: I0129 11:56:43.100265 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:56:43.150786 kubelet[2592]: I0129 11:56:43.150750 2592 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:56:43.161251 kubelet[2592]: I0129 11:56:43.161207 2592 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:56:43.161372 kubelet[2592]: I0129 11:56:43.161295 2592 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:56:43.161736 kubelet[2592]: I0129 11:56:43.161683 2592 topology_manager.go:215] "Topology Admit Handler" podUID="cda33c7518c1e65b2ab9c24236d21c44" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:56:43.161838 kubelet[2592]: I0129 11:56:43.161794 2592 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:56:43.161838 kubelet[2592]: I0129 11:56:43.161828 2592 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:56:43.246286 kubelet[2592]: I0129 11:56:43.246093 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:43.347225 kubelet[2592]: I0129 11:56:43.347083 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:43.347225 kubelet[2592]: I0129 11:56:43.347144 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:43.347225 kubelet[2592]: I0129 11:56:43.347179 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:56:43.347225 kubelet[2592]: I0129 11:56:43.347222 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:43.347418 kubelet[2592]: I0129 11:56:43.347241 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:43.347418 kubelet[2592]: I0129 11:56:43.347259 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:43.347418 kubelet[2592]: I0129 11:56:43.347274 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:43.347418 kubelet[2592]: I0129 11:56:43.347288 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:56:43.473108 kubelet[2592]: E0129 11:56:43.473064 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:43.473367 kubelet[2592]: E0129 11:56:43.473247 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:43.473602 kubelet[2592]: E0129 11:56:43.473564 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:44.034184 kubelet[2592]: I0129 11:56:44.032647 2592 apiserver.go:52] "Watching apiserver" Jan 29 11:56:44.046040 kubelet[2592]: I0129 11:56:44.045977 2592 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:56:44.071238 kubelet[2592]: E0129 11:56:44.071138 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:44.071381 kubelet[2592]: E0129 11:56:44.071276 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:44.243909 kubelet[2592]: E0129 11:56:44.243438 2592 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:56:44.244183 kubelet[2592]: E0129 11:56:44.244137 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:44.271412 kubelet[2592]: I0129 11:56:44.271330 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.27131112 
podStartE2EDuration="1.27131112s" podCreationTimestamp="2025-01-29 11:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:44.268541439 +0000 UTC m=+1.300600312" watchObservedRunningTime="2025-01-29 11:56:44.27131112 +0000 UTC m=+1.303369993" Jan 29 11:56:44.320470 kubelet[2592]: I0129 11:56:44.320273 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.320253288 podStartE2EDuration="1.320253288s" podCreationTimestamp="2025-01-29 11:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:44.295820272 +0000 UTC m=+1.327879266" watchObservedRunningTime="2025-01-29 11:56:44.320253288 +0000 UTC m=+1.352312161" Jan 29 11:56:44.320470 kubelet[2592]: I0129 11:56:44.320445 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.320438788 podStartE2EDuration="1.320438788s" podCreationTimestamp="2025-01-29 11:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:56:44.319833271 +0000 UTC m=+1.351892154" watchObservedRunningTime="2025-01-29 11:56:44.320438788 +0000 UTC m=+1.352497671" Jan 29 11:56:45.073793 kubelet[2592]: E0129 11:56:45.073738 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:46.574599 kubelet[2592]: E0129 11:56:46.574556 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:46.946329 kubelet[2592]: E0129 11:56:46.946187 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:47.989953 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 29 11:56:47.992565 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:47.997113 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:35702.service: Deactivated successfully. Jan 29 11:56:47.999291 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:56:47.999534 systemd[1]: session-7.scope: Consumed 5.070s CPU time, 194.7M memory peak, 0B memory swap peak. Jan 29 11:56:48.000056 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:56:48.000901 systemd-logind[1452]: Removed session 7. Jan 29 11:56:49.592037 kubelet[2592]: E0129 11:56:49.591968 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:50.081215 kubelet[2592]: E0129 11:56:50.081144 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:56.409695 update_engine[1454]: I20250129 11:56:56.409562 1454 update_attempter.cc:509] Updating boot flags... 
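
The recurring dns.go:153 errors above mean the host's /etc/resolv.conf lists more nameservers than the kubelet will propagate to pods; the applied line is trimmed to the first three (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of the same trimming, assuming the conventional three-server limit and the standard resolv.conf path:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // classic resolver limit the kubelet warns about

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }

        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded, keeping first %d of %d: %s\n",
                maxNameservers, len(servers), strings.Join(servers[:maxNameservers], " "))
            return
        }
        fmt.Println("nameservers:", strings.Join(servers, " "))
    }
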
Jan 29 11:56:56.445641 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2688) Jan 29 11:56:56.500176 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2686) Jan 29 11:56:56.527256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2686) Jan 29 11:56:56.579716 kubelet[2592]: E0129 11:56:56.579672 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:56.950831 kubelet[2592]: E0129 11:56:56.950790 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:57.824702 kubelet[2592]: I0129 11:56:57.824637 2592 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:56:57.825269 kubelet[2592]: I0129 11:56:57.825206 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:56:57.825312 containerd[1467]: time="2025-01-29T11:56:57.825007975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:56:58.762293 kubelet[2592]: I0129 11:56:58.761888 2592 topology_manager.go:215] "Topology Admit Handler" podUID="1768597f-42df-4546-93a1-56e0fbacf478" podNamespace="kube-system" podName="kube-proxy-wt7jp" Jan 29 11:56:58.770288 systemd[1]: Created slice kubepods-besteffort-pod1768597f_42df_4546_93a1_56e0fbacf478.slice - libcontainer container kubepods-besteffort-pod1768597f_42df_4546_93a1_56e0fbacf478.slice. 
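
The slice systemd creates for the newly admitted kube-proxy pod, kubepods-besteffort-pod1768597f_42df_4546_93a1_56e0fbacf478.slice, is derived from the pod's QoS class and UID: with the systemd cgroup driver (CgroupDriver "systemd" in the NodeConfig earlier) the UID's dashes become underscores and the slice is nested under kubepods.slice. An illustrative sketch of that mapping, under the assumption that guaranteed pods carry no QoS segment (as suggested by the plain kubepods.slice created earlier):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName mirrors the naming visible in the systemd lines above:
    // kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice.
    func sliceName(qos, uid string) string {
        u := strings.ReplaceAll(uid, "-", "_")
        if qos == "" { // guaranteed pods sit directly under kubepods.slice
            return fmt.Sprintf("kubepods-pod%s.slice", u)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, u)
    }

    func main() {
        // UID taken from the kube-proxy-wt7jp admit line above.
        fmt.Println(sliceName("besteffort", "1768597f-42df-4546-93a1-56e0fbacf478"))
        // Prints: kubepods-besteffort-pod1768597f_42df_4546_93a1_56e0fbacf478.slice
    }
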
Jan 29 11:56:58.839989 kubelet[2592]: I0129 11:56:58.839910 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1768597f-42df-4546-93a1-56e0fbacf478-kube-proxy\") pod \"kube-proxy-wt7jp\" (UID: \"1768597f-42df-4546-93a1-56e0fbacf478\") " pod="kube-system/kube-proxy-wt7jp" Jan 29 11:56:58.839989 kubelet[2592]: I0129 11:56:58.839961 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccst\" (UniqueName: \"kubernetes.io/projected/1768597f-42df-4546-93a1-56e0fbacf478-kube-api-access-nccst\") pod \"kube-proxy-wt7jp\" (UID: \"1768597f-42df-4546-93a1-56e0fbacf478\") " pod="kube-system/kube-proxy-wt7jp" Jan 29 11:56:58.839989 kubelet[2592]: I0129 11:56:58.840001 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1768597f-42df-4546-93a1-56e0fbacf478-xtables-lock\") pod \"kube-proxy-wt7jp\" (UID: \"1768597f-42df-4546-93a1-56e0fbacf478\") " pod="kube-system/kube-proxy-wt7jp" Jan 29 11:56:58.840546 kubelet[2592]: I0129 11:56:58.840022 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1768597f-42df-4546-93a1-56e0fbacf478-lib-modules\") pod \"kube-proxy-wt7jp\" (UID: \"1768597f-42df-4546-93a1-56e0fbacf478\") " pod="kube-system/kube-proxy-wt7jp" Jan 29 11:56:59.128084 kubelet[2592]: I0129 11:56:59.127905 2592 topology_manager.go:215] "Topology Admit Handler" podUID="62d049e2-0a2b-4b6d-8fbf-648e12b064ef" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-gtdhn" Jan 29 11:56:59.140634 systemd[1]: Created slice kubepods-besteffort-pod62d049e2_0a2b_4b6d_8fbf_648e12b064ef.slice - libcontainer container kubepods-besteffort-pod62d049e2_0a2b_4b6d_8fbf_648e12b064ef.slice. 
Jan 29 11:56:59.141604 kubelet[2592]: I0129 11:56:59.141565 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62d049e2-0a2b-4b6d-8fbf-648e12b064ef-var-lib-calico\") pod \"tigera-operator-7bc55997bb-gtdhn\" (UID: \"62d049e2-0a2b-4b6d-8fbf-648e12b064ef\") " pod="tigera-operator/tigera-operator-7bc55997bb-gtdhn" Jan 29 11:56:59.141604 kubelet[2592]: I0129 11:56:59.141605 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ggt\" (UniqueName: \"kubernetes.io/projected/62d049e2-0a2b-4b6d-8fbf-648e12b064ef-kube-api-access-66ggt\") pod \"tigera-operator-7bc55997bb-gtdhn\" (UID: \"62d049e2-0a2b-4b6d-8fbf-648e12b064ef\") " pod="tigera-operator/tigera-operator-7bc55997bb-gtdhn" Jan 29 11:56:59.377933 kubelet[2592]: E0129 11:56:59.377878 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:59.378804 containerd[1467]: time="2025-01-29T11:56:59.378690988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wt7jp,Uid:1768597f-42df-4546-93a1-56e0fbacf478,Namespace:kube-system,Attempt:0,}" Jan 29 11:56:59.443843 containerd[1467]: time="2025-01-29T11:56:59.443787935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gtdhn,Uid:62d049e2-0a2b-4b6d-8fbf-648e12b064ef,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:56:59.455703 containerd[1467]: time="2025-01-29T11:56:59.454793009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:59.455703 containerd[1467]: time="2025-01-29T11:56:59.455649482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:59.455703 containerd[1467]: time="2025-01-29T11:56:59.455663242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:59.456395 containerd[1467]: time="2025-01-29T11:56:59.455761158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:59.481332 systemd[1]: Started cri-containerd-1453e1f462ad8190b92076e46e1b64ccdf9aa8ffa26f5dde94839cf03338bc2f.scope - libcontainer container 1453e1f462ad8190b92076e46e1b64ccdf9aa8ffa26f5dde94839cf03338bc2f. Jan 29 11:56:59.487738 containerd[1467]: time="2025-01-29T11:56:59.487630829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:56:59.487738 containerd[1467]: time="2025-01-29T11:56:59.487687554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:56:59.487738 containerd[1467]: time="2025-01-29T11:56:59.487706565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:59.487930 containerd[1467]: time="2025-01-29T11:56:59.487824966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:56:59.511383 systemd[1]: Started cri-containerd-85def87f25975cfe2b0fe17b92792cd8050d24bd8cdf9f90753a2eb3b6cf2fce.scope - libcontainer container 85def87f25975cfe2b0fe17b92792cd8050d24bd8cdf9f90753a2eb3b6cf2fce. Jan 29 11:56:59.512406 containerd[1467]: time="2025-01-29T11:56:59.512133485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wt7jp,Uid:1768597f-42df-4546-93a1-56e0fbacf478,Namespace:kube-system,Attempt:0,} returns sandbox id \"1453e1f462ad8190b92076e46e1b64ccdf9aa8ffa26f5dde94839cf03338bc2f\"" Jan 29 11:56:59.513060 kubelet[2592]: E0129 11:56:59.512769 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:59.515713 containerd[1467]: time="2025-01-29T11:56:59.515619058Z" level=info msg="CreateContainer within sandbox \"1453e1f462ad8190b92076e46e1b64ccdf9aa8ffa26f5dde94839cf03338bc2f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:56:59.536530 containerd[1467]: time="2025-01-29T11:56:59.536457328Z" level=info msg="CreateContainer within sandbox \"1453e1f462ad8190b92076e46e1b64ccdf9aa8ffa26f5dde94839cf03338bc2f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0fda9cba4ec012e3f3b36f30052fd8428a49d4768a6678fca2fbfd4d888e058b\"" Jan 29 11:56:59.537399 containerd[1467]: time="2025-01-29T11:56:59.537358800Z" level=info msg="StartContainer for \"0fda9cba4ec012e3f3b36f30052fd8428a49d4768a6678fca2fbfd4d888e058b\"" Jan 29 11:56:59.557525 containerd[1467]: time="2025-01-29T11:56:59.555909241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gtdhn,Uid:62d049e2-0a2b-4b6d-8fbf-648e12b064ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"85def87f25975cfe2b0fe17b92792cd8050d24bd8cdf9f90753a2eb3b6cf2fce\"" Jan 29 11:56:59.558574 containerd[1467]: time="2025-01-29T11:56:59.558533279Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:56:59.572551 systemd[1]: Started cri-containerd-0fda9cba4ec012e3f3b36f30052fd8428a49d4768a6678fca2fbfd4d888e058b.scope - libcontainer container 0fda9cba4ec012e3f3b36f30052fd8428a49d4768a6678fca2fbfd4d888e058b. Jan 29 11:56:59.605759 containerd[1467]: time="2025-01-29T11:56:59.605716386Z" level=info msg="StartContainer for \"0fda9cba4ec012e3f3b36f30052fd8428a49d4768a6678fca2fbfd4d888e058b\" returns successfully" Jan 29 11:57:00.098491 kubelet[2592]: E0129 11:57:00.098238 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:00.107790 kubelet[2592]: I0129 11:57:00.107725 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wt7jp" podStartSLOduration=2.10770566 podStartE2EDuration="2.10770566s" podCreationTimestamp="2025-01-29 11:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:00.107072588 +0000 UTC m=+17.139131461" watchObservedRunningTime="2025-01-29 11:57:00.10770566 +0000 UTC m=+17.139764533" Jan 29 11:57:00.132678 systemd[1]: run-containerd-runc-k8s.io-1453e1f462ad8190b92076e46e1b64ccdf9aa8ffa26f5dde94839cf03338bc2f-runc.SitB1B.mount: Deactivated successfully. 
Jan 29 11:57:01.214108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057222681.mount: Deactivated successfully. Jan 29 11:57:01.694957 containerd[1467]: time="2025-01-29T11:57:01.694871899Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:01.696689 containerd[1467]: time="2025-01-29T11:57:01.696594355Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:57:01.698128 containerd[1467]: time="2025-01-29T11:57:01.698042276Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:01.701053 containerd[1467]: time="2025-01-29T11:57:01.700989821Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:01.701785 containerd[1467]: time="2025-01-29T11:57:01.701735248Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.143162421s" Jan 29 11:57:01.701838 containerd[1467]: time="2025-01-29T11:57:01.701786931Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:57:01.707561 containerd[1467]: time="2025-01-29T11:57:01.707523015Z" level=info msg="CreateContainer within sandbox \"85def87f25975cfe2b0fe17b92792cd8050d24bd8cdf9f90753a2eb3b6cf2fce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:57:01.724527 containerd[1467]: time="2025-01-29T11:57:01.724469999Z" level=info msg="CreateContainer within sandbox \"85def87f25975cfe2b0fe17b92792cd8050d24bd8cdf9f90753a2eb3b6cf2fce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84a1ed5dbd59900094ca8880b2d2e6278a2b48c350fe0ec54389b8039f74f350\"" Jan 29 11:57:01.725071 containerd[1467]: time="2025-01-29T11:57:01.725031708Z" level=info msg="StartContainer for \"84a1ed5dbd59900094ca8880b2d2e6278a2b48c350fe0ec54389b8039f74f350\"" Jan 29 11:57:01.759339 systemd[1]: Started cri-containerd-84a1ed5dbd59900094ca8880b2d2e6278a2b48c350fe0ec54389b8039f74f350.scope - libcontainer container 84a1ed5dbd59900094ca8880b2d2e6278a2b48c350fe0ec54389b8039f74f350. 
Jan 29 11:57:01.792244 containerd[1467]: time="2025-01-29T11:57:01.792144282Z" level=info msg="StartContainer for \"84a1ed5dbd59900094ca8880b2d2e6278a2b48c350fe0ec54389b8039f74f350\" returns successfully" Jan 29 11:57:02.114835 kubelet[2592]: I0129 11:57:02.114751 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-gtdhn" podStartSLOduration=1.966397024 podStartE2EDuration="4.114733645s" podCreationTimestamp="2025-01-29 11:56:58 +0000 UTC" firstStartedPulling="2025-01-29 11:56:59.557375634 +0000 UTC m=+16.589434507" lastFinishedPulling="2025-01-29 11:57:01.705712255 +0000 UTC m=+18.737771128" observedRunningTime="2025-01-29 11:57:02.114538805 +0000 UTC m=+19.146597678" watchObservedRunningTime="2025-01-29 11:57:02.114733645 +0000 UTC m=+19.146792518" Jan 29 11:57:05.505403 kubelet[2592]: I0129 11:57:05.504282 2592 topology_manager.go:215] "Topology Admit Handler" podUID="5f307633-9d7e-4a8a-be2d-a5d85644d610" podNamespace="calico-system" podName="calico-typha-6875665d48-5v6wk" Jan 29 11:57:05.526933 systemd[1]: Created slice kubepods-besteffort-pod5f307633_9d7e_4a8a_be2d_a5d85644d610.slice - libcontainer container kubepods-besteffort-pod5f307633_9d7e_4a8a_be2d_a5d85644d610.slice. Jan 29 11:57:05.591771 kubelet[2592]: I0129 11:57:05.591692 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f307633-9d7e-4a8a-be2d-a5d85644d610-tigera-ca-bundle\") pod \"calico-typha-6875665d48-5v6wk\" (UID: \"5f307633-9d7e-4a8a-be2d-a5d85644d610\") " pod="calico-system/calico-typha-6875665d48-5v6wk" Jan 29 11:57:05.591930 kubelet[2592]: I0129 11:57:05.591805 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f307633-9d7e-4a8a-be2d-a5d85644d610-typha-certs\") pod \"calico-typha-6875665d48-5v6wk\" (UID: \"5f307633-9d7e-4a8a-be2d-a5d85644d610\") " pod="calico-system/calico-typha-6875665d48-5v6wk" Jan 29 11:57:05.591930 kubelet[2592]: I0129 11:57:05.591836 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27zqc\" (UniqueName: \"kubernetes.io/projected/5f307633-9d7e-4a8a-be2d-a5d85644d610-kube-api-access-27zqc\") pod \"calico-typha-6875665d48-5v6wk\" (UID: \"5f307633-9d7e-4a8a-be2d-a5d85644d610\") " pod="calico-system/calico-typha-6875665d48-5v6wk" Jan 29 11:57:05.610894 kubelet[2592]: I0129 11:57:05.610839 2592 topology_manager.go:215] "Topology Admit Handler" podUID="de0a4c73-a2d7-4981-9792-00b9870c9ed7" podNamespace="calico-system" podName="calico-node-9x6fb" Jan 29 11:57:05.618780 systemd[1]: Created slice kubepods-besteffort-podde0a4c73_a2d7_4981_9792_00b9870c9ed7.slice - libcontainer container kubepods-besteffort-podde0a4c73_a2d7_4981_9792_00b9870c9ed7.slice. 
Jan 29 11:57:05.692459 kubelet[2592]: I0129 11:57:05.692395 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-lib-modules\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692459 kubelet[2592]: I0129 11:57:05.692444 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-var-lib-calico\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692459 kubelet[2592]: I0129 11:57:05.692465 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5vv\" (UniqueName: \"kubernetes.io/projected/de0a4c73-a2d7-4981-9792-00b9870c9ed7-kube-api-access-bf5vv\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692664 kubelet[2592]: I0129 11:57:05.692482 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-var-run-calico\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692664 kubelet[2592]: I0129 11:57:05.692497 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-cni-log-dir\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692664 kubelet[2592]: I0129 11:57:05.692513 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/de0a4c73-a2d7-4981-9792-00b9870c9ed7-node-certs\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692664 kubelet[2592]: I0129 11:57:05.692573 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-cni-net-dir\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692664 kubelet[2592]: I0129 11:57:05.692632 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-flexvol-driver-host\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692792 kubelet[2592]: I0129 11:57:05.692674 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-xtables-lock\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692792 kubelet[2592]: I0129 11:57:05.692693 2592 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-cni-bin-dir\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.692792 kubelet[2592]: I0129 11:57:05.692737 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/de0a4c73-a2d7-4981-9792-00b9870c9ed7-policysync\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.693071 kubelet[2592]: I0129 11:57:05.692995 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de0a4c73-a2d7-4981-9792-00b9870c9ed7-tigera-ca-bundle\") pod \"calico-node-9x6fb\" (UID: \"de0a4c73-a2d7-4981-9792-00b9870c9ed7\") " pod="calico-system/calico-node-9x6fb" Jan 29 11:57:05.802328 kubelet[2592]: E0129 11:57:05.802169 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:05.802328 kubelet[2592]: W0129 11:57:05.802208 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:05.802328 kubelet[2592]: E0129 11:57:05.802245 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:05.802682 kubelet[2592]: E0129 11:57:05.802587 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:05.802682 kubelet[2592]: W0129 11:57:05.802604 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:05.802682 kubelet[2592]: E0129 11:57:05.802613 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:05.840274 kubelet[2592]: E0129 11:57:05.840239 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:05.840791 containerd[1467]: time="2025-01-29T11:57:05.840751007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6875665d48-5v6wk,Uid:5f307633-9d7e-4a8a-be2d-a5d85644d610,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:05.894675 kubelet[2592]: E0129 11:57:05.894636 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:05.894675 kubelet[2592]: W0129 11:57:05.894664 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:05.894760 kubelet[2592]: E0129 11:57:05.894688 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:05.996462 kubelet[2592]: E0129 11:57:05.996411 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:05.996462 kubelet[2592]: W0129 11:57:05.996449 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:05.996607 kubelet[2592]: E0129 11:57:05.996477 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.097140 kubelet[2592]: E0129 11:57:06.097017 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.097140 kubelet[2592]: W0129 11:57:06.097042 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.097140 kubelet[2592]: E0129 11:57:06.097061 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.198260 kubelet[2592]: E0129 11:57:06.198217 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.198260 kubelet[2592]: W0129 11:57:06.198240 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.198260 kubelet[2592]: E0129 11:57:06.198272 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.261008 kubelet[2592]: E0129 11:57:06.260968 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.261008 kubelet[2592]: W0129 11:57:06.260990 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.261008 kubelet[2592]: E0129 11:57:06.261012 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.280095 kubelet[2592]: I0129 11:57:06.280040 2592 topology_manager.go:215] "Topology Admit Handler" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" podNamespace="calico-system" podName="csi-node-driver-jtgqv" Jan 29 11:57:06.280901 kubelet[2592]: E0129 11:57:06.280405 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:06.290002 containerd[1467]: time="2025-01-29T11:57:06.289809305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:06.290002 containerd[1467]: time="2025-01-29T11:57:06.289921672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:06.290616 containerd[1467]: time="2025-01-29T11:57:06.290554047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:06.290734 containerd[1467]: time="2025-01-29T11:57:06.290674542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:06.313352 systemd[1]: Started cri-containerd-9f4ae6a43e57fd83b2fd39513959b4a8f37726841737381b3947b25fba2c2101.scope - libcontainer container 9f4ae6a43e57fd83b2fd39513959b4a8f37726841737381b3947b25fba2c2101. 
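The recurring driver-call.go warnings come from kubelet probing the FlexVolume plugin directory: Calico's nodeagent~uds driver directory is present, but the uds binary has not been installed yet, so the exec fails, the output is empty, and unmarshalling "" yields "unexpected end of JSON input". A hedged sketch of that probe sequence follows; the driver path is taken from the log, the rest is illustrative and not kubelet's code:

    // Illustrative only: mimics kubelet exec'ing a FlexVolume driver with
    // "init" and decoding its JSON reply. A missing binary produces empty
    // output, and json.Unmarshal then fails exactly as the log shows.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is the minimal shape of a FlexVolume driver reply.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func callDriver(driver string, args ...string) (*DriverStatus, error) {
        out, execErr := exec.Command(driver, args...).CombinedOutput()
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Empty output from a missing or failing binary ends up here.
            return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)",
                out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        st, err := callDriver(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        if err != nil {
            fmt.Println("driver call failed:", err)
            return
        }
        fmt.Println("driver status:", st.Status)
    }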
Jan 29 11:57:06.361548 containerd[1467]: time="2025-01-29T11:57:06.361394941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6875665d48-5v6wk,Uid:5f307633-9d7e-4a8a-be2d-a5d85644d610,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f4ae6a43e57fd83b2fd39513959b4a8f37726841737381b3947b25fba2c2101\"" Jan 29 11:57:06.363497 kubelet[2592]: E0129 11:57:06.363460 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:06.372829 containerd[1467]: time="2025-01-29T11:57:06.372549543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:57:06.373302 kubelet[2592]: E0129 11:57:06.373196 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.373568 kubelet[2592]: W0129 11:57:06.373391 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.373568 kubelet[2592]: E0129 11:57:06.373420 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.374180 kubelet[2592]: E0129 11:57:06.374018 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.374180 kubelet[2592]: W0129 11:57:06.374036 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.374180 kubelet[2592]: E0129 11:57:06.374046 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.374819 kubelet[2592]: E0129 11:57:06.374804 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.374943 kubelet[2592]: W0129 11:57:06.374873 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.374943 kubelet[2592]: E0129 11:57:06.374887 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.375722 kubelet[2592]: E0129 11:57:06.375334 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.375949 kubelet[2592]: W0129 11:57:06.375847 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.375949 kubelet[2592]: E0129 11:57:06.375866 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.376475 kubelet[2592]: E0129 11:57:06.376373 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.376475 kubelet[2592]: W0129 11:57:06.376385 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.376475 kubelet[2592]: E0129 11:57:06.376397 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.376902 kubelet[2592]: E0129 11:57:06.376791 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.376902 kubelet[2592]: W0129 11:57:06.376802 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.376902 kubelet[2592]: E0129 11:57:06.376813 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.377252 kubelet[2592]: E0129 11:57:06.377187 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.377252 kubelet[2592]: W0129 11:57:06.377199 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.377252 kubelet[2592]: E0129 11:57:06.377208 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.377696 kubelet[2592]: E0129 11:57:06.377626 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.377696 kubelet[2592]: W0129 11:57:06.377638 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.377696 kubelet[2592]: E0129 11:57:06.377647 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.378410 kubelet[2592]: E0129 11:57:06.378315 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.378410 kubelet[2592]: W0129 11:57:06.378326 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.378410 kubelet[2592]: E0129 11:57:06.378335 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.379466 kubelet[2592]: E0129 11:57:06.379293 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.379466 kubelet[2592]: W0129 11:57:06.379305 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.379466 kubelet[2592]: E0129 11:57:06.379341 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.379734 kubelet[2592]: E0129 11:57:06.379709 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.379836 kubelet[2592]: W0129 11:57:06.379820 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.379941 kubelet[2592]: E0129 11:57:06.379915 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.380438 kubelet[2592]: E0129 11:57:06.380353 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.380438 kubelet[2592]: W0129 11:57:06.380367 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.380438 kubelet[2592]: E0129 11:57:06.380378 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.380861 kubelet[2592]: E0129 11:57:06.380754 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.380861 kubelet[2592]: W0129 11:57:06.380767 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.380861 kubelet[2592]: E0129 11:57:06.380778 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.381361 kubelet[2592]: E0129 11:57:06.381326 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.381432 kubelet[2592]: W0129 11:57:06.381420 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.381480 kubelet[2592]: E0129 11:57:06.381469 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.381726 kubelet[2592]: E0129 11:57:06.381711 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.381891 kubelet[2592]: W0129 11:57:06.381781 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.381891 kubelet[2592]: E0129 11:57:06.381798 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.382070 kubelet[2592]: E0129 11:57:06.382056 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.382149 kubelet[2592]: W0129 11:57:06.382132 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.382376 kubelet[2592]: E0129 11:57:06.382273 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.382738 kubelet[2592]: E0129 11:57:06.382715 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.382855 kubelet[2592]: W0129 11:57:06.382840 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.383004 kubelet[2592]: E0129 11:57:06.382935 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.383300 kubelet[2592]: E0129 11:57:06.383288 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.383399 kubelet[2592]: W0129 11:57:06.383386 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.383486 kubelet[2592]: E0129 11:57:06.383443 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.383920 kubelet[2592]: E0129 11:57:06.383813 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.383920 kubelet[2592]: W0129 11:57:06.383823 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.383920 kubelet[2592]: E0129 11:57:06.383832 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.384170 kubelet[2592]: E0129 11:57:06.384073 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.384170 kubelet[2592]: W0129 11:57:06.384084 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.384170 kubelet[2592]: E0129 11:57:06.384093 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.399659 kubelet[2592]: E0129 11:57:06.399625 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.399949 kubelet[2592]: W0129 11:57:06.399788 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.399949 kubelet[2592]: E0129 11:57:06.399813 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.399949 kubelet[2592]: I0129 11:57:06.399842 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51-registration-dir\") pod \"csi-node-driver-jtgqv\" (UID: \"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\") " pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:06.400171 kubelet[2592]: E0129 11:57:06.400138 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.400252 kubelet[2592]: W0129 11:57:06.400229 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.400252 kubelet[2592]: E0129 11:57:06.400252 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.400333 kubelet[2592]: I0129 11:57:06.400269 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51-varrun\") pod \"csi-node-driver-jtgqv\" (UID: \"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\") " pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:06.400527 kubelet[2592]: E0129 11:57:06.400505 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.400527 kubelet[2592]: W0129 11:57:06.400517 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.400593 kubelet[2592]: E0129 11:57:06.400539 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.400593 kubelet[2592]: I0129 11:57:06.400554 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51-kubelet-dir\") pod \"csi-node-driver-jtgqv\" (UID: \"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\") " pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:06.400839 kubelet[2592]: E0129 11:57:06.400821 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.400839 kubelet[2592]: W0129 11:57:06.400834 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.400923 kubelet[2592]: E0129 11:57:06.400847 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.400923 kubelet[2592]: I0129 11:57:06.400863 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51-socket-dir\") pod \"csi-node-driver-jtgqv\" (UID: \"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\") " pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:06.401165 kubelet[2592]: E0129 11:57:06.401137 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.401210 kubelet[2592]: W0129 11:57:06.401168 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.401210 kubelet[2592]: E0129 11:57:06.401188 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.401423 kubelet[2592]: E0129 11:57:06.401399 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.401423 kubelet[2592]: W0129 11:57:06.401411 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.401510 kubelet[2592]: E0129 11:57:06.401426 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.401710 kubelet[2592]: E0129 11:57:06.401686 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.401710 kubelet[2592]: W0129 11:57:06.401698 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.401777 kubelet[2592]: E0129 11:57:06.401714 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.401941 kubelet[2592]: E0129 11:57:06.401909 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.401941 kubelet[2592]: W0129 11:57:06.401921 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.402027 kubelet[2592]: E0129 11:57:06.401958 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.402177 kubelet[2592]: E0129 11:57:06.402136 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.402177 kubelet[2592]: W0129 11:57:06.402148 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.402350 kubelet[2592]: E0129 11:57:06.402192 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.402638 kubelet[2592]: E0129 11:57:06.402620 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.402638 kubelet[2592]: W0129 11:57:06.402633 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.402719 kubelet[2592]: E0129 11:57:06.402661 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.402719 kubelet[2592]: I0129 11:57:06.402690 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jfwv\" (UniqueName: \"kubernetes.io/projected/9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51-kube-api-access-5jfwv\") pod \"csi-node-driver-jtgqv\" (UID: \"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\") " pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:06.402895 kubelet[2592]: E0129 11:57:06.402877 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.402895 kubelet[2592]: W0129 11:57:06.402889 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.402980 kubelet[2592]: E0129 11:57:06.402919 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.403124 kubelet[2592]: E0129 11:57:06.403106 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.403124 kubelet[2592]: W0129 11:57:06.403118 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.403244 kubelet[2592]: E0129 11:57:06.403129 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.403413 kubelet[2592]: E0129 11:57:06.403395 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.403413 kubelet[2592]: W0129 11:57:06.403409 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.403480 kubelet[2592]: E0129 11:57:06.403424 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.403661 kubelet[2592]: E0129 11:57:06.403643 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.403661 kubelet[2592]: W0129 11:57:06.403657 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.403721 kubelet[2592]: E0129 11:57:06.403668 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.403915 kubelet[2592]: E0129 11:57:06.403892 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.403915 kubelet[2592]: W0129 11:57:06.403902 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.403915 kubelet[2592]: E0129 11:57:06.403911 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.504546 kubelet[2592]: E0129 11:57:06.504499 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.504546 kubelet[2592]: W0129 11:57:06.504525 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.504546 kubelet[2592]: E0129 11:57:06.504545 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.504878 kubelet[2592]: E0129 11:57:06.504770 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.504878 kubelet[2592]: W0129 11:57:06.504778 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.504878 kubelet[2592]: E0129 11:57:06.504786 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.505097 kubelet[2592]: E0129 11:57:06.504976 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.505097 kubelet[2592]: W0129 11:57:06.504985 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.505097 kubelet[2592]: E0129 11:57:06.504994 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.505215 kubelet[2592]: E0129 11:57:06.505190 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.505215 kubelet[2592]: W0129 11:57:06.505206 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.505275 kubelet[2592]: E0129 11:57:06.505221 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.505613 kubelet[2592]: E0129 11:57:06.505561 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.505613 kubelet[2592]: W0129 11:57:06.505577 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.505985 kubelet[2592]: E0129 11:57:06.505616 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.505985 kubelet[2592]: E0129 11:57:06.505878 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.505985 kubelet[2592]: W0129 11:57:06.505888 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.505985 kubelet[2592]: E0129 11:57:06.505901 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.506166 kubelet[2592]: E0129 11:57:06.506131 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.506166 kubelet[2592]: W0129 11:57:06.506145 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.506243 kubelet[2592]: E0129 11:57:06.506226 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.506404 kubelet[2592]: E0129 11:57:06.506378 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.506404 kubelet[2592]: W0129 11:57:06.506390 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.506624 kubelet[2592]: E0129 11:57:06.506443 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.506660 kubelet[2592]: E0129 11:57:06.506633 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.506660 kubelet[2592]: W0129 11:57:06.506650 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.506757 kubelet[2592]: E0129 11:57:06.506736 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.506991 kubelet[2592]: E0129 11:57:06.506967 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.506991 kubelet[2592]: W0129 11:57:06.506980 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.507070 kubelet[2592]: E0129 11:57:06.507023 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.507283 kubelet[2592]: E0129 11:57:06.507258 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.507283 kubelet[2592]: W0129 11:57:06.507276 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.507459 kubelet[2592]: E0129 11:57:06.507390 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.507535 kubelet[2592]: E0129 11:57:06.507514 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.507573 kubelet[2592]: W0129 11:57:06.507543 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.507682 kubelet[2592]: E0129 11:57:06.507645 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.507774 kubelet[2592]: E0129 11:57:06.507757 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.507774 kubelet[2592]: W0129 11:57:06.507768 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.507854 kubelet[2592]: E0129 11:57:06.507833 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.508251 kubelet[2592]: E0129 11:57:06.508236 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.508251 kubelet[2592]: W0129 11:57:06.508249 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.508365 kubelet[2592]: E0129 11:57:06.508348 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.508535 kubelet[2592]: E0129 11:57:06.508485 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.508535 kubelet[2592]: W0129 11:57:06.508510 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.508686 kubelet[2592]: E0129 11:57:06.508661 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.508808 kubelet[2592]: E0129 11:57:06.508793 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.508808 kubelet[2592]: W0129 11:57:06.508805 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.508867 kubelet[2592]: E0129 11:57:06.508826 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.509012 kubelet[2592]: E0129 11:57:06.508980 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.509012 kubelet[2592]: W0129 11:57:06.508990 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.509012 kubelet[2592]: E0129 11:57:06.509002 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.509240 kubelet[2592]: E0129 11:57:06.509221 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.509284 kubelet[2592]: W0129 11:57:06.509242 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.509284 kubelet[2592]: E0129 11:57:06.509256 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.509921 kubelet[2592]: E0129 11:57:06.509899 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.509921 kubelet[2592]: W0129 11:57:06.509913 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.510000 kubelet[2592]: E0129 11:57:06.509926 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.510227 kubelet[2592]: E0129 11:57:06.510198 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.510227 kubelet[2592]: W0129 11:57:06.510211 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.510310 kubelet[2592]: E0129 11:57:06.510298 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.510632 kubelet[2592]: E0129 11:57:06.510479 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.510632 kubelet[2592]: W0129 11:57:06.510503 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.510632 kubelet[2592]: E0129 11:57:06.510591 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.510919 kubelet[2592]: E0129 11:57:06.510891 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.510919 kubelet[2592]: W0129 11:57:06.510904 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.511009 kubelet[2592]: E0129 11:57:06.510935 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.511280 kubelet[2592]: E0129 11:57:06.511265 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.511342 kubelet[2592]: W0129 11:57:06.511323 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.511342 kubelet[2592]: E0129 11:57:06.511349 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.511603 kubelet[2592]: E0129 11:57:06.511586 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.511603 kubelet[2592]: W0129 11:57:06.511598 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.511684 kubelet[2592]: E0129 11:57:06.511608 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.512219 kubelet[2592]: E0129 11:57:06.512179 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.512219 kubelet[2592]: W0129 11:57:06.512200 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.512219 kubelet[2592]: E0129 11:57:06.512212 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:06.517896 kubelet[2592]: E0129 11:57:06.517866 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:06.517896 kubelet[2592]: W0129 11:57:06.517887 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:06.517896 kubelet[2592]: E0129 11:57:06.517903 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:06.529308 kubelet[2592]: E0129 11:57:06.529271 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:06.529825 containerd[1467]: time="2025-01-29T11:57:06.529789177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9x6fb,Uid:de0a4c73-a2d7-4981-9792-00b9870c9ed7,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:06.606066 containerd[1467]: time="2025-01-29T11:57:06.605804095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:06.606066 containerd[1467]: time="2025-01-29T11:57:06.605885417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:06.606066 containerd[1467]: time="2025-01-29T11:57:06.605901170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:06.606382 containerd[1467]: time="2025-01-29T11:57:06.606017926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:06.628380 systemd[1]: Started cri-containerd-3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540.scope - libcontainer container 3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540. Jan 29 11:57:06.655450 containerd[1467]: time="2025-01-29T11:57:06.655369055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9x6fb,Uid:de0a4c73-a2d7-4981-9792-00b9870c9ed7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\"" Jan 29 11:57:06.656016 kubelet[2592]: E0129 11:57:06.655905 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:08.062473 kubelet[2592]: E0129 11:57:08.062408 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:08.412963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635539985.mount: Deactivated successfully. 
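The burst of driver-call.go and plugins.go errors above is the kubelet's FlexVolume probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary with the single argument "init", and expects a JSON status object on stdout. Because nodeagent~uds/uds is not on disk yet, the exec fails ("executable file not found in $PATH"), stdout stays empty, and unmarshalling that empty output yields "unexpected end of JSON input". A minimal, illustrative driver stub that would satisfy the init call is sketched below in Python; it is included only for clarity, handles nothing but init, and is not the real nodeagent~uds driver (which also implements mount/unmount).

#!/usr/bin/env python3
"""Minimal FlexVolume driver stub (illustrative sketch only).

The kubelet execs the driver binary found under
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>
and parses a JSON object from stdout. An empty stdout is what produces the
"unexpected end of JSON input" errors in this log.
"""
import json
import sys


def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that this volume type needs no attach/detach step.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any operation this stub does not implement.
    print(json.dumps({"status": "Not supported", "message": f"operation {op!r} not implemented"}))
    return 1


if __name__ == "__main__":
    sys.exit(main())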
Jan 29 11:57:08.847485 containerd[1467]: time="2025-01-29T11:57:08.847391348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:08.851342 containerd[1467]: time="2025-01-29T11:57:08.851261739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 11:57:08.853019 containerd[1467]: time="2025-01-29T11:57:08.852816984Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:08.856182 containerd[1467]: time="2025-01-29T11:57:08.856116459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:08.857297 containerd[1467]: time="2025-01-29T11:57:08.857257969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.484658391s" Jan 29 11:57:08.857347 containerd[1467]: time="2025-01-29T11:57:08.857299546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:57:08.858710 containerd[1467]: time="2025-01-29T11:57:08.858655424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:57:08.872012 containerd[1467]: time="2025-01-29T11:57:08.871879942Z" level=info msg="CreateContainer within sandbox \"9f4ae6a43e57fd83b2fd39513959b4a8f37726841737381b3947b25fba2c2101\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:57:08.890116 containerd[1467]: time="2025-01-29T11:57:08.890062968Z" level=info msg="CreateContainer within sandbox \"9f4ae6a43e57fd83b2fd39513959b4a8f37726841737381b3947b25fba2c2101\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4fb47a72c6336abdf9e1f02afeec9ffc203991cd1fc3c9d538d43cc43e399a9a\"" Jan 29 11:57:08.894755 containerd[1467]: time="2025-01-29T11:57:08.894660811Z" level=info msg="StartContainer for \"4fb47a72c6336abdf9e1f02afeec9ffc203991cd1fc3c9d538d43cc43e399a9a\"" Jan 29 11:57:08.924419 systemd[1]: Started cri-containerd-4fb47a72c6336abdf9e1f02afeec9ffc203991cd1fc3c9d538d43cc43e399a9a.scope - libcontainer container 4fb47a72c6336abdf9e1f02afeec9ffc203991cd1fc3c9d538d43cc43e399a9a. 
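The pull above finishes with a containerd record carrying the repo tag, repo digest, resolved image id, on-disk size, and wall-clock pull time ("in 2.484658391s"). The sketch below shows one way to extract those fields from such a line; the regular expression and field names are assumptions based only on the line layout visible in this log, not on anything from containerd itself.

#!/usr/bin/env python3
"""Illustrative parser for containerd 'Pulled image ... in <duration>' log lines."""
import re

PULLED_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\" with image id \\"(?P<image_id>[^"\\]+)\\", '
    r'repo tag \\"(?P<tag>[^"\\]+)\\", repo digest \\"(?P<digest>[^"\\]+)\\", '
    r'size \\"(?P<size>\d+)\\" in (?P<duration>[0-9.]+m?s)'
)


def parse_pull(line: str):
    """Return a dict of pull details, or None if the line is not a pull record."""
    match = PULLED_RE.search(line)
    return match.groupdict() if match else None


if __name__ == "__main__":
    # Sample assembled from the typha pull entry above.
    sample = (
        'time="2025-01-29T11:57:08.857257969Z" level=info msg="Pulled image '
        '\\"ghcr.io/flatcar/calico/typha:v3.29.1\\" with image id '
        '\\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\\", '
        'repo tag \\"ghcr.io/flatcar/calico/typha:v3.29.1\\", repo digest '
        '\\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\\", '
        'size \\"31343217\\" in 2.484658391s"'
    )
    print(parse_pull(sample))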
Jan 29 11:57:08.970893 containerd[1467]: time="2025-01-29T11:57:08.970829118Z" level=info msg="StartContainer for \"4fb47a72c6336abdf9e1f02afeec9ffc203991cd1fc3c9d538d43cc43e399a9a\" returns successfully" Jan 29 11:57:09.148822 kubelet[2592]: E0129 11:57:09.146729 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:09.173887 kubelet[2592]: I0129 11:57:09.173702 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6875665d48-5v6wk" podStartSLOduration=1.680248602 podStartE2EDuration="4.173686024s" podCreationTimestamp="2025-01-29 11:57:05 +0000 UTC" firstStartedPulling="2025-01-29 11:57:06.364672551 +0000 UTC m=+23.396731424" lastFinishedPulling="2025-01-29 11:57:08.858109972 +0000 UTC m=+25.890168846" observedRunningTime="2025-01-29 11:57:09.173426323 +0000 UTC m=+26.205485196" watchObservedRunningTime="2025-01-29 11:57:09.173686024 +0000 UTC m=+26.205744897" Jan 29 11:57:09.203932 kubelet[2592]: E0129 11:57:09.203888 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.203932 kubelet[2592]: W0129 11:57:09.203916 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.203932 kubelet[2592]: E0129 11:57:09.203941 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.204489 kubelet[2592]: E0129 11:57:09.204424 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.204489 kubelet[2592]: W0129 11:57:09.204461 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.204489 kubelet[2592]: E0129 11:57:09.204503 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.205174 kubelet[2592]: E0129 11:57:09.204950 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.205174 kubelet[2592]: W0129 11:57:09.204979 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.205174 kubelet[2592]: E0129 11:57:09.205010 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:09.205428 kubelet[2592]: E0129 11:57:09.205371 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.205428 kubelet[2592]: W0129 11:57:09.205394 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.205428 kubelet[2592]: E0129 11:57:09.205411 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.205875 kubelet[2592]: E0129 11:57:09.205806 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.205875 kubelet[2592]: W0129 11:57:09.205836 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.205875 kubelet[2592]: E0129 11:57:09.205849 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.206099 kubelet[2592]: E0129 11:57:09.206074 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.206099 kubelet[2592]: W0129 11:57:09.206082 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.206099 kubelet[2592]: E0129 11:57:09.206091 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.206398 kubelet[2592]: E0129 11:57:09.206371 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.206398 kubelet[2592]: W0129 11:57:09.206383 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.206398 kubelet[2592]: E0129 11:57:09.206393 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.206655 kubelet[2592]: E0129 11:57:09.206637 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.206691 kubelet[2592]: W0129 11:57:09.206656 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.206691 kubelet[2592]: E0129 11:57:09.206673 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:09.207182 kubelet[2592]: E0129 11:57:09.206992 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.207182 kubelet[2592]: W0129 11:57:09.207009 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.207182 kubelet[2592]: E0129 11:57:09.207021 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.207337 kubelet[2592]: E0129 11:57:09.207300 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.207396 kubelet[2592]: W0129 11:57:09.207336 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.207396 kubelet[2592]: E0129 11:57:09.207369 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.207871 kubelet[2592]: E0129 11:57:09.207846 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.207871 kubelet[2592]: W0129 11:57:09.207864 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.207964 kubelet[2592]: E0129 11:57:09.207877 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.208195 kubelet[2592]: E0129 11:57:09.208147 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.208195 kubelet[2592]: W0129 11:57:09.208193 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.208267 kubelet[2592]: E0129 11:57:09.208209 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.208572 kubelet[2592]: E0129 11:57:09.208533 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.208572 kubelet[2592]: W0129 11:57:09.208554 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.208572 kubelet[2592]: E0129 11:57:09.208564 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:09.208861 kubelet[2592]: E0129 11:57:09.208844 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.208861 kubelet[2592]: W0129 11:57:09.208859 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.208938 kubelet[2592]: E0129 11:57:09.208869 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.209873 kubelet[2592]: E0129 11:57:09.209851 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.209873 kubelet[2592]: W0129 11:57:09.209869 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.209948 kubelet[2592]: E0129 11:57:09.209884 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.226845 kubelet[2592]: E0129 11:57:09.226794 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.226845 kubelet[2592]: W0129 11:57:09.226821 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.226845 kubelet[2592]: E0129 11:57:09.226841 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.227135 kubelet[2592]: E0129 11:57:09.227111 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.227135 kubelet[2592]: W0129 11:57:09.227124 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.227209 kubelet[2592]: E0129 11:57:09.227138 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.227551 kubelet[2592]: E0129 11:57:09.227495 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.227551 kubelet[2592]: W0129 11:57:09.227545 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.227621 kubelet[2592]: E0129 11:57:09.227574 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:09.227870 kubelet[2592]: E0129 11:57:09.227848 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.227870 kubelet[2592]: W0129 11:57:09.227861 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.227918 kubelet[2592]: E0129 11:57:09.227875 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.228099 kubelet[2592]: E0129 11:57:09.228078 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.228099 kubelet[2592]: W0129 11:57:09.228090 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.228167 kubelet[2592]: E0129 11:57:09.228103 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.228342 kubelet[2592]: E0129 11:57:09.228325 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.228342 kubelet[2592]: W0129 11:57:09.228340 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.228447 kubelet[2592]: E0129 11:57:09.228404 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.228643 kubelet[2592]: E0129 11:57:09.228625 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.228643 kubelet[2592]: W0129 11:57:09.228639 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.228698 kubelet[2592]: E0129 11:57:09.228680 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.228990 kubelet[2592]: E0129 11:57:09.228959 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.228990 kubelet[2592]: W0129 11:57:09.228976 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.228990 kubelet[2592]: E0129 11:57:09.228997 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:09.229280 kubelet[2592]: E0129 11:57:09.229260 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.229280 kubelet[2592]: W0129 11:57:09.229277 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.229347 kubelet[2592]: E0129 11:57:09.229296 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.229547 kubelet[2592]: E0129 11:57:09.229530 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.229547 kubelet[2592]: W0129 11:57:09.229544 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.229597 kubelet[2592]: E0129 11:57:09.229560 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.229809 kubelet[2592]: E0129 11:57:09.229792 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.229809 kubelet[2592]: W0129 11:57:09.229804 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.229868 kubelet[2592]: E0129 11:57:09.229819 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.230071 kubelet[2592]: E0129 11:57:09.230052 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.230103 kubelet[2592]: W0129 11:57:09.230071 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.230103 kubelet[2592]: E0129 11:57:09.230090 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.230409 kubelet[2592]: E0129 11:57:09.230391 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.230409 kubelet[2592]: W0129 11:57:09.230404 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.230469 kubelet[2592]: E0129 11:57:09.230420 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:09.230693 kubelet[2592]: E0129 11:57:09.230675 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.230693 kubelet[2592]: W0129 11:57:09.230691 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.230741 kubelet[2592]: E0129 11:57:09.230710 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.230963 kubelet[2592]: E0129 11:57:09.230947 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.230963 kubelet[2592]: W0129 11:57:09.230960 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.231017 kubelet[2592]: E0129 11:57:09.230974 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.231204 kubelet[2592]: E0129 11:57:09.231189 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.231204 kubelet[2592]: W0129 11:57:09.231202 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.231264 kubelet[2592]: E0129 11:57:09.231214 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.231452 kubelet[2592]: E0129 11:57:09.231434 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.231452 kubelet[2592]: W0129 11:57:09.231447 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.231520 kubelet[2592]: E0129 11:57:09.231461 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:09.231679 kubelet[2592]: E0129 11:57:09.231664 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:09.231679 kubelet[2592]: W0129 11:57:09.231676 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:09.231724 kubelet[2592]: E0129 11:57:09.231684 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.061995 kubelet[2592]: E0129 11:57:10.061918 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:10.132363 kubelet[2592]: I0129 11:57:10.132324 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:57:10.132899 kubelet[2592]: E0129 11:57:10.132868 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:10.218691 kubelet[2592]: E0129 11:57:10.218599 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.218691 kubelet[2592]: W0129 11:57:10.218634 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.218691 kubelet[2592]: E0129 11:57:10.218659 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.219288 kubelet[2592]: E0129 11:57:10.218964 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.219288 kubelet[2592]: W0129 11:57:10.218973 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.219288 kubelet[2592]: E0129 11:57:10.218983 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.219288 kubelet[2592]: E0129 11:57:10.219260 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.219288 kubelet[2592]: W0129 11:57:10.219271 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.219288 kubelet[2592]: E0129 11:57:10.219280 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.219713 kubelet[2592]: E0129 11:57:10.219613 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.219713 kubelet[2592]: W0129 11:57:10.219662 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.219793 kubelet[2592]: E0129 11:57:10.219741 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.220187 kubelet[2592]: E0129 11:57:10.220170 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.220187 kubelet[2592]: W0129 11:57:10.220185 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.220266 kubelet[2592]: E0129 11:57:10.220198 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.220660 kubelet[2592]: E0129 11:57:10.220640 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.220660 kubelet[2592]: W0129 11:57:10.220656 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.220748 kubelet[2592]: E0129 11:57:10.220668 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.221261 kubelet[2592]: E0129 11:57:10.220989 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.221261 kubelet[2592]: W0129 11:57:10.221003 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.221261 kubelet[2592]: E0129 11:57:10.221014 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.221360 kubelet[2592]: E0129 11:57:10.221313 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.221360 kubelet[2592]: W0129 11:57:10.221324 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.221360 kubelet[2592]: E0129 11:57:10.221335 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.221780 kubelet[2592]: E0129 11:57:10.221757 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.221780 kubelet[2592]: W0129 11:57:10.221776 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.221837 kubelet[2592]: E0129 11:57:10.221790 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.222144 kubelet[2592]: E0129 11:57:10.222126 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.222200 kubelet[2592]: W0129 11:57:10.222144 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.222200 kubelet[2592]: E0129 11:57:10.222172 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.222493 kubelet[2592]: E0129 11:57:10.222465 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.222493 kubelet[2592]: W0129 11:57:10.222482 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.222493 kubelet[2592]: E0129 11:57:10.222494 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.222783 kubelet[2592]: E0129 11:57:10.222766 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.222783 kubelet[2592]: W0129 11:57:10.222780 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.222859 kubelet[2592]: E0129 11:57:10.222792 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.223143 kubelet[2592]: E0129 11:57:10.223016 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.223143 kubelet[2592]: W0129 11:57:10.223033 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.223143 kubelet[2592]: E0129 11:57:10.223044 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.223391 kubelet[2592]: E0129 11:57:10.223353 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.223428 kubelet[2592]: W0129 11:57:10.223393 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.223454 kubelet[2592]: E0129 11:57:10.223426 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.223796 kubelet[2592]: E0129 11:57:10.223770 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.223861 kubelet[2592]: W0129 11:57:10.223790 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.223861 kubelet[2592]: E0129 11:57:10.223816 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.234316 kubelet[2592]: E0129 11:57:10.234272 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.234316 kubelet[2592]: W0129 11:57:10.234297 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.234316 kubelet[2592]: E0129 11:57:10.234315 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.234763 kubelet[2592]: E0129 11:57:10.234643 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.234763 kubelet[2592]: W0129 11:57:10.234658 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.234763 kubelet[2592]: E0129 11:57:10.234676 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.235014 kubelet[2592]: E0129 11:57:10.234977 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.235100 kubelet[2592]: W0129 11:57:10.235026 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.235100 kubelet[2592]: E0129 11:57:10.235053 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.235670 kubelet[2592]: E0129 11:57:10.235637 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.235670 kubelet[2592]: W0129 11:57:10.235656 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.235766 kubelet[2592]: E0129 11:57:10.235673 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.235985 kubelet[2592]: E0129 11:57:10.235964 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.235985 kubelet[2592]: W0129 11:57:10.235978 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.236078 kubelet[2592]: E0129 11:57:10.236040 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.236292 kubelet[2592]: E0129 11:57:10.236274 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.236292 kubelet[2592]: W0129 11:57:10.236287 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.236359 kubelet[2592]: E0129 11:57:10.236325 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.237274 kubelet[2592]: E0129 11:57:10.236533 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.237274 kubelet[2592]: W0129 11:57:10.236547 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.237274 kubelet[2592]: E0129 11:57:10.236627 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.237274 kubelet[2592]: E0129 11:57:10.236833 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.237274 kubelet[2592]: W0129 11:57:10.236844 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.237274 kubelet[2592]: E0129 11:57:10.236860 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.237274 kubelet[2592]: E0129 11:57:10.237094 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.237274 kubelet[2592]: W0129 11:57:10.237103 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.237274 kubelet[2592]: E0129 11:57:10.237118 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.237540 kubelet[2592]: E0129 11:57:10.237395 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.237540 kubelet[2592]: W0129 11:57:10.237407 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.237540 kubelet[2592]: E0129 11:57:10.237424 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.237636 kubelet[2592]: E0129 11:57:10.237613 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.237636 kubelet[2592]: W0129 11:57:10.237629 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.237694 kubelet[2592]: E0129 11:57:10.237644 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.237879 kubelet[2592]: E0129 11:57:10.237858 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.237879 kubelet[2592]: W0129 11:57:10.237874 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.237935 kubelet[2592]: E0129 11:57:10.237901 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.238086 kubelet[2592]: E0129 11:57:10.238067 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.238086 kubelet[2592]: W0129 11:57:10.238082 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.238209 kubelet[2592]: E0129 11:57:10.238187 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.238350 kubelet[2592]: E0129 11:57:10.238324 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.238350 kubelet[2592]: W0129 11:57:10.238338 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.238481 kubelet[2592]: E0129 11:57:10.238395 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.238720 kubelet[2592]: E0129 11:57:10.238698 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.238720 kubelet[2592]: W0129 11:57:10.238712 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.238810 kubelet[2592]: E0129 11:57:10.238727 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.239149 kubelet[2592]: E0129 11:57:10.238992 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.239149 kubelet[2592]: W0129 11:57:10.239007 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.239149 kubelet[2592]: E0129 11:57:10.239018 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.239372 kubelet[2592]: E0129 11:57:10.239355 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.239372 kubelet[2592]: W0129 11:57:10.239368 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.239445 kubelet[2592]: E0129 11:57:10.239378 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:10.239800 kubelet[2592]: E0129 11:57:10.239783 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:10.239800 kubelet[2592]: W0129 11:57:10.239797 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:10.239858 kubelet[2592]: E0129 11:57:10.239807 2592 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:10.284499 containerd[1467]: time="2025-01-29T11:57:10.284443266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:10.285465 containerd[1467]: time="2025-01-29T11:57:10.285420055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 11:57:10.286881 containerd[1467]: time="2025-01-29T11:57:10.286844192Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:10.289354 containerd[1467]: time="2025-01-29T11:57:10.289306786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:10.289884 containerd[1467]: time="2025-01-29T11:57:10.289837088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.43113715s" Jan 29 11:57:10.289884 containerd[1467]: time="2025-01-29T11:57:10.289878203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:57:10.292794 containerd[1467]: time="2025-01-29T11:57:10.292755960Z" level=info msg="CreateContainer within sandbox \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:57:10.686507 containerd[1467]: time="2025-01-29T11:57:10.686417289Z" level=info msg="CreateContainer within sandbox \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49\"" Jan 29 11:57:10.687305 containerd[1467]: time="2025-01-29T11:57:10.687260660Z" level=info msg="StartContainer for \"26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49\"" Jan 29 11:57:10.724379 systemd[1]: Started cri-containerd-26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49.scope - libcontainer container 26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49. Jan 29 11:57:10.897437 systemd[1]: cri-containerd-26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49.scope: Deactivated successfully. Jan 29 11:57:11.336243 containerd[1467]: time="2025-01-29T11:57:11.336179430Z" level=info msg="StartContainer for \"26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49\" returns successfully" Jan 29 11:57:11.338820 kubelet[2592]: E0129 11:57:11.338795 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:11.363816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49-rootfs.mount: Deactivated successfully. 
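The burst of driver-call failures above is the kubelet's FlexVolume prober repeatedly exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument and trying to decode a JSON status object from its stdout. The binary is not installed yet (the flexvol-driver container created above from the calico pod2daemon-flexvol image is what normally drops it in place), so the exec reports "executable file not found in $PATH", the captured output is empty, and unmarshalling the empty string yields "unexpected end of JSON input". A minimal sketch of that call-and-parse pattern, with a hypothetical status type rather than the kubelet's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverPath mirrors the driver path in the log entries above.
const driverPath = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

// driverStatus is a minimal stand-in for the JSON reply a FlexVolume driver
// is expected to print for commands such as "init".
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver execs the driver and decodes its stdout. When the binary is
// missing, Output() returns no bytes, and json.Unmarshal on empty input
// fails with exactly the "unexpected end of JSON input" seen in the log.
func callDriver(args ...string) (*driverStatus, error) {
	out, _ := exec.Command(driverPath, args...).Output()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	if _, err := callDriver("init"); err != nil {
		fmt.Println("FlexVolume driver call failed:", err)
	}
}
```

Once the driver binary exists and prints a valid JSON reply to init, this class of probe error stops appearing.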
Jan 29 11:57:11.371226 containerd[1467]: time="2025-01-29T11:57:11.368129636Z" level=info msg="shim disconnected" id=26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49 namespace=k8s.io Jan 29 11:57:11.371226 containerd[1467]: time="2025-01-29T11:57:11.371229070Z" level=warning msg="cleaning up after shim disconnected" id=26565cd689a42d33bbfc84519dd4df3400be279558be78005047337f95799f49 namespace=k8s.io Jan 29 11:57:11.371226 containerd[1467]: time="2025-01-29T11:57:11.371243440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:57:12.062226 kubelet[2592]: E0129 11:57:12.062143 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:12.251094 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:49722.service - OpenSSH per-connection server daemon (10.0.0.1:49722). Jan 29 11:57:12.290968 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 49722 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:12.292993 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:12.297646 systemd-logind[1452]: New session 8 of user core. Jan 29 11:57:12.307318 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:57:12.341664 kubelet[2592]: E0129 11:57:12.341516 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:12.342147 containerd[1467]: time="2025-01-29T11:57:12.342037932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:57:12.422745 sshd[3360]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:12.427500 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:49722.service: Deactivated successfully. Jan 29 11:57:12.429679 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:57:12.430691 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:57:12.431952 systemd-logind[1452]: Removed session 8. 
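The recurring "Nameserver limits exceeded" messages are the kubelet trimming the pod-facing resolv.conf: upstream Kubernetes caps it at a small number of nameservers (three, as far as this editor can tell, which matches the three-entry applied line "1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that truncation under that assumption; the fourth server in the example is hypothetical, since the log only shows the entries that survived:

```go
package main

import "fmt"

// maxDNSNameservers mirrors the assumed upstream cap that triggers the
// "Nameserver limits exceeded" warning.
const maxDNSNameservers = 3

// applyNameserverLimit keeps only the first maxDNSNameservers entries, which
// is why the applied line in the log contains exactly three servers.
func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxDNSNameservers {
		return servers[:maxDNSNameservers]
	}
	return servers
}

func main() {
	// The fourth entry is hypothetical.
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	fmt.Println(applyNameserverLimit(node)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```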
Jan 29 11:57:14.062528 kubelet[2592]: E0129 11:57:14.062404 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:16.062560 kubelet[2592]: E0129 11:57:16.062501 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:17.196647 containerd[1467]: time="2025-01-29T11:57:17.196589182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:17.197632 containerd[1467]: time="2025-01-29T11:57:17.197553895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:57:17.198924 containerd[1467]: time="2025-01-29T11:57:17.198890973Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:17.204454 containerd[1467]: time="2025-01-29T11:57:17.204407805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:17.205379 containerd[1467]: time="2025-01-29T11:57:17.205325962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.863245212s" Jan 29 11:57:17.205432 containerd[1467]: time="2025-01-29T11:57:17.205377758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:57:17.208014 containerd[1467]: time="2025-01-29T11:57:17.207981143Z" level=info msg="CreateContainer within sandbox \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:57:17.223437 containerd[1467]: time="2025-01-29T11:57:17.223389853Z" level=info msg="CreateContainer within sandbox \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918\"" Jan 29 11:57:17.224121 containerd[1467]: time="2025-01-29T11:57:17.224038162Z" level=info msg="StartContainer for \"00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918\"" Jan 29 11:57:17.269433 systemd[1]: Started cri-containerd-00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918.scope - libcontainer container 00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918. Jan 29 11:57:17.435917 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:49724.service - OpenSSH per-connection server daemon (10.0.0.1:49724). 
Jan 29 11:57:17.545661 containerd[1467]: time="2025-01-29T11:57:17.545572510Z" level=info msg="StartContainer for \"00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918\" returns successfully" Jan 29 11:57:17.569399 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 49724 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:17.571366 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:17.575539 systemd-logind[1452]: New session 9 of user core. Jan 29 11:57:17.582303 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:57:17.701965 sshd[3420]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:17.706720 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:49724.service: Deactivated successfully. Jan 29 11:57:17.709475 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:57:17.710430 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:57:17.711476 systemd-logind[1452]: Removed session 9. Jan 29 11:57:18.062191 kubelet[2592]: E0129 11:57:18.062103 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:18.549247 kubelet[2592]: E0129 11:57:18.549206 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:19.080309 systemd[1]: cri-containerd-00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918.scope: Deactivated successfully. Jan 29 11:57:19.095195 kubelet[2592]: I0129 11:57:19.095104 2592 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:57:19.105208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918-rootfs.mount: Deactivated successfully. 
Jan 29 11:57:19.116216 containerd[1467]: time="2025-01-29T11:57:19.116026237Z" level=info msg="shim disconnected" id=00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918 namespace=k8s.io Jan 29 11:57:19.116216 containerd[1467]: time="2025-01-29T11:57:19.116117762Z" level=warning msg="cleaning up after shim disconnected" id=00145ca09ca133a54a909fedc0abe33c11a46fb1961c559b3941b427dcf07918 namespace=k8s.io Jan 29 11:57:19.116216 containerd[1467]: time="2025-01-29T11:57:19.116132412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:57:19.126049 kubelet[2592]: I0129 11:57:19.125483 2592 topology_manager.go:215] "Topology Admit Handler" podUID="47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5ltr8" Jan 29 11:57:19.136494 kubelet[2592]: I0129 11:57:19.135585 2592 topology_manager.go:215] "Topology Admit Handler" podUID="2e64f57b-151d-456c-8ce1-abd59806b192" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q8nvd" Jan 29 11:57:19.138321 kubelet[2592]: I0129 11:57:19.138065 2592 topology_manager.go:215] "Topology Admit Handler" podUID="7fcf1af3-18c6-4f40-a2a4-51e333c1c84a" podNamespace="calico-apiserver" podName="calico-apiserver-67b8685944-q45lw" Jan 29 11:57:19.139840 kubelet[2592]: I0129 11:57:19.139813 2592 topology_manager.go:215] "Topology Admit Handler" podUID="e2c92084-f55b-4220-b802-d9b21f1f159e" podNamespace="calico-apiserver" podName="calico-apiserver-67b8685944-5dgzn" Jan 29 11:57:19.141502 kubelet[2592]: I0129 11:57:19.141004 2592 topology_manager.go:215] "Topology Admit Handler" podUID="34001644-e4ba-464a-bc44-9457515a4f0a" podNamespace="calico-system" podName="calico-kube-controllers-65f599f856-ftjvl" Jan 29 11:57:19.147409 systemd[1]: Created slice kubepods-burstable-pod47b1acd2_0c77_4c5d_aa4f_cd4e87a15eb7.slice - libcontainer container kubepods-burstable-pod47b1acd2_0c77_4c5d_aa4f_cd4e87a15eb7.slice. Jan 29 11:57:19.154504 systemd[1]: Created slice kubepods-burstable-pod2e64f57b_151d_456c_8ce1_abd59806b192.slice - libcontainer container kubepods-burstable-pod2e64f57b_151d_456c_8ce1_abd59806b192.slice. Jan 29 11:57:19.163995 systemd[1]: Created slice kubepods-besteffort-pode2c92084_f55b_4220_b802_d9b21f1f159e.slice - libcontainer container kubepods-besteffort-pode2c92084_f55b_4220_b802_d9b21f1f159e.slice. Jan 29 11:57:19.169369 systemd[1]: Created slice kubepods-besteffort-pod7fcf1af3_18c6_4f40_a2a4_51e333c1c84a.slice - libcontainer container kubepods-besteffort-pod7fcf1af3_18c6_4f40_a2a4_51e333c1c84a.slice. Jan 29 11:57:19.176641 systemd[1]: Created slice kubepods-besteffort-pod34001644_e4ba_464a_bc44_9457515a4f0a.slice - libcontainer container kubepods-besteffort-pod34001644_e4ba_464a_bc44_9457515a4f0a.slice. 
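The kubepods-*-pod*.slice units created here follow the systemd cgroup-driver naming convention visible in the log: the pod's QoS class (burstable or besteffort) plus its UID with dashes replaced by underscores. A small sketch of that mapping, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice name the kubelet's systemd cgroup
// driver uses for a pod: QoS class plus the UID with '-' turned into '_'.
func podSliceName(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the coredns-7db6d8ff4d-5ltr8 pod admitted above.
	fmt.Println(podSliceName("burstable", "47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7"))
	// kubepods-burstable-pod47b1acd2_0c77_4c5d_aa4f_cd4e87a15eb7.slice
}
```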
Jan 29 11:57:19.311180 kubelet[2592]: I0129 11:57:19.311117 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp52d\" (UniqueName: \"kubernetes.io/projected/e2c92084-f55b-4220-b802-d9b21f1f159e-kube-api-access-cp52d\") pod \"calico-apiserver-67b8685944-5dgzn\" (UID: \"e2c92084-f55b-4220-b802-d9b21f1f159e\") " pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" Jan 29 11:57:19.311369 kubelet[2592]: I0129 11:57:19.311197 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e64f57b-151d-456c-8ce1-abd59806b192-config-volume\") pod \"coredns-7db6d8ff4d-q8nvd\" (UID: \"2e64f57b-151d-456c-8ce1-abd59806b192\") " pod="kube-system/coredns-7db6d8ff4d-q8nvd" Jan 29 11:57:19.311369 kubelet[2592]: I0129 11:57:19.311225 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mwf\" (UniqueName: \"kubernetes.io/projected/7fcf1af3-18c6-4f40-a2a4-51e333c1c84a-kube-api-access-24mwf\") pod \"calico-apiserver-67b8685944-q45lw\" (UID: \"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a\") " pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" Jan 29 11:57:19.311369 kubelet[2592]: I0129 11:57:19.311263 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqvp\" (UniqueName: \"kubernetes.io/projected/34001644-e4ba-464a-bc44-9457515a4f0a-kube-api-access-nfqvp\") pod \"calico-kube-controllers-65f599f856-ftjvl\" (UID: \"34001644-e4ba-464a-bc44-9457515a4f0a\") " pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" Jan 29 11:57:19.311369 kubelet[2592]: I0129 11:57:19.311312 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxtdl\" (UniqueName: \"kubernetes.io/projected/47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7-kube-api-access-wxtdl\") pod \"coredns-7db6d8ff4d-5ltr8\" (UID: \"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7\") " pod="kube-system/coredns-7db6d8ff4d-5ltr8" Jan 29 11:57:19.311369 kubelet[2592]: I0129 11:57:19.311332 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34001644-e4ba-464a-bc44-9457515a4f0a-tigera-ca-bundle\") pod \"calico-kube-controllers-65f599f856-ftjvl\" (UID: \"34001644-e4ba-464a-bc44-9457515a4f0a\") " pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" Jan 29 11:57:19.311536 kubelet[2592]: I0129 11:57:19.311357 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7fcf1af3-18c6-4f40-a2a4-51e333c1c84a-calico-apiserver-certs\") pod \"calico-apiserver-67b8685944-q45lw\" (UID: \"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a\") " pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" Jan 29 11:57:19.311536 kubelet[2592]: I0129 11:57:19.311379 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2c92084-f55b-4220-b802-d9b21f1f159e-calico-apiserver-certs\") pod \"calico-apiserver-67b8685944-5dgzn\" (UID: \"e2c92084-f55b-4220-b802-d9b21f1f159e\") " pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" Jan 29 11:57:19.311536 kubelet[2592]: I0129 11:57:19.311401 2592 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h5pt\" (UniqueName: \"kubernetes.io/projected/2e64f57b-151d-456c-8ce1-abd59806b192-kube-api-access-4h5pt\") pod \"coredns-7db6d8ff4d-q8nvd\" (UID: \"2e64f57b-151d-456c-8ce1-abd59806b192\") " pod="kube-system/coredns-7db6d8ff4d-q8nvd" Jan 29 11:57:19.311536 kubelet[2592]: I0129 11:57:19.311424 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7-config-volume\") pod \"coredns-7db6d8ff4d-5ltr8\" (UID: \"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7\") " pod="kube-system/coredns-7db6d8ff4d-5ltr8" Jan 29 11:57:19.452294 kubelet[2592]: E0129 11:57:19.452122 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:19.453590 containerd[1467]: time="2025-01-29T11:57:19.453286942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ltr8,Uid:47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:19.460119 kubelet[2592]: E0129 11:57:19.460062 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:19.460977 containerd[1467]: time="2025-01-29T11:57:19.460922663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8nvd,Uid:2e64f57b-151d-456c-8ce1-abd59806b192,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:19.467882 containerd[1467]: time="2025-01-29T11:57:19.467839750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-5dgzn,Uid:e2c92084-f55b-4220-b802-d9b21f1f159e,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:57:19.472641 containerd[1467]: time="2025-01-29T11:57:19.472578645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-q45lw,Uid:7fcf1af3-18c6-4f40-a2a4-51e333c1c84a,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:57:19.479546 containerd[1467]: time="2025-01-29T11:57:19.479512927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f599f856-ftjvl,Uid:34001644-e4ba-464a-bc44-9457515a4f0a,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:19.553290 kubelet[2592]: E0129 11:57:19.553235 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:19.553956 containerd[1467]: time="2025-01-29T11:57:19.553897527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:57:19.753739 containerd[1467]: time="2025-01-29T11:57:19.753655644Z" level=error msg="Failed to destroy network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.754656 containerd[1467]: time="2025-01-29T11:57:19.754600057Z" level=error msg="encountered an error cleaning up failed sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.754763 containerd[1467]: time="2025-01-29T11:57:19.754704679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ltr8,Uid:47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.766396 containerd[1467]: time="2025-01-29T11:57:19.766348737Z" level=error msg="Failed to destroy network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.767407 containerd[1467]: time="2025-01-29T11:57:19.767381719Z" level=error msg="encountered an error cleaning up failed sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.768141 containerd[1467]: time="2025-01-29T11:57:19.768067207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8nvd,Uid:2e64f57b-151d-456c-8ce1-abd59806b192,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.770369 kubelet[2592]: E0129 11:57:19.770308 2592 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.770447 kubelet[2592]: E0129 11:57:19.770401 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8nvd" Jan 29 11:57:19.770447 kubelet[2592]: E0129 11:57:19.770402 2592 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.770533 kubelet[2592]: E0129 11:57:19.770498 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5ltr8" Jan 29 11:57:19.770568 kubelet[2592]: E0129 11:57:19.770539 2592 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5ltr8" Jan 29 11:57:19.770648 kubelet[2592]: E0129 11:57:19.770602 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5ltr8_kube-system(47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5ltr8_kube-system(47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5ltr8" podUID="47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7" Jan 29 11:57:19.770942 kubelet[2592]: E0129 11:57:19.770428 2592 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8nvd" Jan 29 11:57:19.770991 kubelet[2592]: E0129 11:57:19.770961 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8nvd_kube-system(2e64f57b-151d-456c-8ce1-abd59806b192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8nvd_kube-system(2e64f57b-151d-456c-8ce1-abd59806b192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8nvd" podUID="2e64f57b-151d-456c-8ce1-abd59806b192" Jan 29 11:57:19.790091 containerd[1467]: time="2025-01-29T11:57:19.790009858Z" level=error msg="Failed to destroy network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.790973 containerd[1467]: time="2025-01-29T11:57:19.790415990Z" level=error msg="encountered an error cleaning up failed sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.790973 containerd[1467]: time="2025-01-29T11:57:19.790458526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-5dgzn,Uid:e2c92084-f55b-4220-b802-d9b21f1f159e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.791100 kubelet[2592]: E0129 11:57:19.790708 2592 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.791100 kubelet[2592]: E0129 11:57:19.790766 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" Jan 29 11:57:19.791100 kubelet[2592]: E0129 11:57:19.790784 2592 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" Jan 29 11:57:19.791238 kubelet[2592]: E0129 11:57:19.790826 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b8685944-5dgzn_calico-apiserver(e2c92084-f55b-4220-b802-d9b21f1f159e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b8685944-5dgzn_calico-apiserver(e2c92084-f55b-4220-b802-d9b21f1f159e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" podUID="e2c92084-f55b-4220-b802-d9b21f1f159e" Jan 29 11:57:19.795361 containerd[1467]: time="2025-01-29T11:57:19.795315310Z" level=error msg="Failed to destroy network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.795775 containerd[1467]: time="2025-01-29T11:57:19.795741845Z" 
level=error msg="encountered an error cleaning up failed sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.795833 containerd[1467]: time="2025-01-29T11:57:19.795793549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f599f856-ftjvl,Uid:34001644-e4ba-464a-bc44-9457515a4f0a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.796020 kubelet[2592]: E0129 11:57:19.795987 2592 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.796072 kubelet[2592]: E0129 11:57:19.796039 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" Jan 29 11:57:19.796139 kubelet[2592]: E0129 11:57:19.796064 2592 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" Jan 29 11:57:19.796203 kubelet[2592]: E0129 11:57:19.796138 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65f599f856-ftjvl_calico-system(34001644-e4ba-464a-bc44-9457515a4f0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65f599f856-ftjvl_calico-system(34001644-e4ba-464a-bc44-9457515a4f0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" podUID="34001644-e4ba-464a-bc44-9457515a4f0a" Jan 29 11:57:19.797489 containerd[1467]: time="2025-01-29T11:57:19.797457779Z" level=error msg="Failed to destroy network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.797792 containerd[1467]: time="2025-01-29T11:57:19.797765021Z" level=error msg="encountered an error cleaning up failed sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.797864 containerd[1467]: time="2025-01-29T11:57:19.797800664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-q45lw,Uid:7fcf1af3-18c6-4f40-a2a4-51e333c1c84a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.797957 kubelet[2592]: E0129 11:57:19.797917 2592 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:19.797957 kubelet[2592]: E0129 11:57:19.797942 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" Jan 29 11:57:19.798026 kubelet[2592]: E0129 11:57:19.797956 2592 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" Jan 29 11:57:19.798026 kubelet[2592]: E0129 11:57:19.797986 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b8685944-q45lw_calico-apiserver(7fcf1af3-18c6-4f40-a2a4-51e333c1c84a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b8685944-q45lw_calico-apiserver(7fcf1af3-18c6-4f40-a2a4-51e333c1c84a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" podUID="7fcf1af3-18c6-4f40-a2a4-51e333c1c84a" Jan 29 11:57:20.067845 systemd[1]: Created slice kubepods-besteffort-pod9ff3da2e_f5a9_4f2e_9b75_df1775a2ff51.slice - libcontainer container 
kubepods-besteffort-pod9ff3da2e_f5a9_4f2e_9b75_df1775a2ff51.slice. Jan 29 11:57:20.072882 containerd[1467]: time="2025-01-29T11:57:20.072840404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtgqv,Uid:9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:20.141637 containerd[1467]: time="2025-01-29T11:57:20.141555793Z" level=error msg="Failed to destroy network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.142108 containerd[1467]: time="2025-01-29T11:57:20.142041394Z" level=error msg="encountered an error cleaning up failed sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.142147 containerd[1467]: time="2025-01-29T11:57:20.142103139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtgqv,Uid:9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.142444 kubelet[2592]: E0129 11:57:20.142381 2592 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.142863 kubelet[2592]: E0129 11:57:20.142452 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:20.142863 kubelet[2592]: E0129 11:57:20.142472 2592 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jtgqv" Jan 29 11:57:20.142863 kubelet[2592]: E0129 11:57:20.142523 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jtgqv_calico-system(9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jtgqv_calico-system(9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:20.145244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7-shm.mount: Deactivated successfully. Jan 29 11:57:20.555368 kubelet[2592]: I0129 11:57:20.555326 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:20.556398 kubelet[2592]: I0129 11:57:20.556363 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:20.558597 kubelet[2592]: I0129 11:57:20.558574 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:20.558979 containerd[1467]: time="2025-01-29T11:57:20.558936304Z" level=info msg="StopPodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\"" Jan 29 11:57:20.559188 containerd[1467]: time="2025-01-29T11:57:20.559143914Z" level=info msg="Ensure that sandbox dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7 in task-service has been cleanup successfully" Jan 29 11:57:20.559924 containerd[1467]: time="2025-01-29T11:57:20.559895452Z" level=info msg="StopPodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\"" Jan 29 11:57:20.560086 containerd[1467]: time="2025-01-29T11:57:20.560048352Z" level=info msg="Ensure that sandbox 75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a in task-service has been cleanup successfully" Jan 29 11:57:20.561494 containerd[1467]: time="2025-01-29T11:57:20.560473230Z" level=info msg="StopPodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\"" Jan 29 11:57:20.561494 containerd[1467]: time="2025-01-29T11:57:20.560616670Z" level=info msg="Ensure that sandbox 79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da in task-service has been cleanup successfully" Jan 29 11:57:20.562951 kubelet[2592]: I0129 11:57:20.562912 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:20.563933 containerd[1467]: time="2025-01-29T11:57:20.563602143Z" level=info msg="StopPodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\"" Jan 29 11:57:20.564240 containerd[1467]: time="2025-01-29T11:57:20.564200792Z" level=info msg="Ensure that sandbox 69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2 in task-service has been cleanup successfully" Jan 29 11:57:20.564750 kubelet[2592]: I0129 11:57:20.564722 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:20.565565 containerd[1467]: time="2025-01-29T11:57:20.565533375Z" level=info msg="StopPodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\"" Jan 29 11:57:20.565967 containerd[1467]: time="2025-01-29T11:57:20.565721365Z" level=info msg="Ensure that 
sandbox d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842 in task-service has been cleanup successfully" Jan 29 11:57:20.566635 kubelet[2592]: I0129 11:57:20.566609 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:20.567508 containerd[1467]: time="2025-01-29T11:57:20.567480790Z" level=info msg="StopPodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\"" Jan 29 11:57:20.567808 containerd[1467]: time="2025-01-29T11:57:20.567725034Z" level=info msg="Ensure that sandbox bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0 in task-service has been cleanup successfully" Jan 29 11:57:20.639532 containerd[1467]: time="2025-01-29T11:57:20.639457559Z" level=error msg="StopPodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" failed" error="failed to destroy network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.639698 containerd[1467]: time="2025-01-29T11:57:20.639666231Z" level=error msg="StopPodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" failed" error="failed to destroy network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.639792 containerd[1467]: time="2025-01-29T11:57:20.639767996Z" level=error msg="StopPodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" failed" error="failed to destroy network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.640197 kubelet[2592]: E0129 11:57:20.639911 2592 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:20.640197 kubelet[2592]: E0129 11:57:20.639933 2592 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:20.640197 kubelet[2592]: E0129 11:57:20.640017 2592 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:20.640197 kubelet[2592]: E0129 11:57:20.639983 2592 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842"} Jan 29 11:57:20.640197 kubelet[2592]: E0129 11:57:20.640031 2592 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2"} Jan 29 11:57:20.640370 kubelet[2592]: E0129 11:57:20.640058 2592 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:20.640370 kubelet[2592]: E0129 11:57:20.640064 2592 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34001644-e4ba-464a-bc44-9457515a4f0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:20.640370 kubelet[2592]: E0129 11:57:20.640083 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5ltr8" podUID="47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7" Jan 29 11:57:20.640518 kubelet[2592]: E0129 11:57:20.640100 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34001644-e4ba-464a-bc44-9457515a4f0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" podUID="34001644-e4ba-464a-bc44-9457515a4f0a" Jan 29 11:57:20.640518 kubelet[2592]: E0129 11:57:20.639988 2592 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a"} Jan 29 11:57:20.640518 kubelet[2592]: E0129 11:57:20.640178 2592 kuberuntime_manager.go:1075] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"2e64f57b-151d-456c-8ce1-abd59806b192\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:20.640518 kubelet[2592]: E0129 11:57:20.640203 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e64f57b-151d-456c-8ce1-abd59806b192\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8nvd" podUID="2e64f57b-151d-456c-8ce1-abd59806b192" Jan 29 11:57:20.642102 containerd[1467]: time="2025-01-29T11:57:20.642045579Z" level=error msg="StopPodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" failed" error="failed to destroy network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.642273 containerd[1467]: time="2025-01-29T11:57:20.642052964Z" level=error msg="StopPodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" failed" error="failed to destroy network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.642340 kubelet[2592]: E0129 11:57:20.642256 2592 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:20.642340 kubelet[2592]: E0129 11:57:20.642278 2592 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7"} Jan 29 11:57:20.642340 kubelet[2592]: E0129 11:57:20.642299 2592 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:20.642340 kubelet[2592]: E0129 11:57:20.642315 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jtgqv" podUID="9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51" Jan 29 11:57:20.642503 containerd[1467]: time="2025-01-29T11:57:20.642294051Z" level=error msg="StopPodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" failed" error="failed to destroy network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:57:20.642533 kubelet[2592]: E0129 11:57:20.642459 2592 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:20.642533 kubelet[2592]: E0129 11:57:20.642468 2592 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:20.642533 kubelet[2592]: E0129 11:57:20.642508 2592 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da"} Jan 29 11:57:20.642599 kubelet[2592]: E0129 11:57:20.642542 2592 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2c92084-f55b-4220-b802-d9b21f1f159e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:20.642599 kubelet[2592]: E0129 11:57:20.642487 2592 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0"} Jan 29 11:57:20.642599 kubelet[2592]: E0129 11:57:20.642567 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2c92084-f55b-4220-b802-d9b21f1f159e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" podUID="e2c92084-f55b-4220-b802-d9b21f1f159e" Jan 29 11:57:20.642599 kubelet[2592]: E0129 11:57:20.642587 2592 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:57:20.642740 kubelet[2592]: E0129 11:57:20.642605 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" podUID="7fcf1af3-18c6-4f40-a2a4-51e333c1c84a" Jan 29 11:57:22.721414 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150). Jan 29 11:57:23.438506 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:23.440490 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:23.447417 systemd-logind[1452]: New session 10 of user core. Jan 29 11:57:23.458422 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:57:23.589330 sshd[3824]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:23.595370 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:39150.service: Deactivated successfully. Jan 29 11:57:23.598895 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:57:23.600000 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:57:23.601544 systemd-logind[1452]: Removed session 10. Jan 29 11:57:26.104820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379696917.mount: Deactivated successfully. 
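Every "KillPodSandbox" failure in the entries above reduces to the same root cause: during teardown the Calico CNI plugin stats /var/lib/calico/nodename, and that file only appears once the calico/node container has started and mounted /var/lib/calico. A minimal Go sketch of the same check follows; the path comes from the error text itself, and the rest is illustrative rather than Calico source.

```go
// Sketch of the condition behind the errors above: CNI add/delete calls fail
// until calico/node has written its nodename file on the host.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // path taken from the error message

	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		fmt.Println("calico/node has not registered this host yet; sandbox teardown will keep failing")
		return
	}
	if err != nil {
		fmt.Println("unexpected error reading nodename file:", err)
		return
	}
	fmt.Printf("node registered as %q; CNI operations should succeed from here on\n", string(data))
}
```

Once the calico-node container is up (see the StartContainer entries that follow), the retried StopPodSandbox calls at 11:57:32-34 do return successfully.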
Jan 29 11:57:27.373816 containerd[1467]: time="2025-01-29T11:57:27.373744909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:27.394460 containerd[1467]: time="2025-01-29T11:57:27.394419754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:57:27.420062 containerd[1467]: time="2025-01-29T11:57:27.419994621Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:27.453276 containerd[1467]: time="2025-01-29T11:57:27.453219637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:27.453957 containerd[1467]: time="2025-01-29T11:57:27.453908854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.899964371s" Jan 29 11:57:27.454009 containerd[1467]: time="2025-01-29T11:57:27.453958302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:57:27.462436 containerd[1467]: time="2025-01-29T11:57:27.462309401Z" level=info msg="CreateContainer within sandbox \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:57:27.790781 containerd[1467]: time="2025-01-29T11:57:27.790679635Z" level=info msg="CreateContainer within sandbox \"3e3e51de14fbfe3944ca9a3f8f8372b77ee915b2775b62a99efc0773296cd540\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47\"" Jan 29 11:57:27.791436 containerd[1467]: time="2025-01-29T11:57:27.791379633Z" level=info msg="StartContainer for \"06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47\"" Jan 29 11:57:27.872347 systemd[1]: Started cri-containerd-06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47.scope - libcontainer container 06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47. Jan 29 11:57:27.990328 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:57:27.990569 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:57:28.601692 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:39164.service - OpenSSH per-connection server daemon (10.0.0.1:39164). Jan 29 11:57:28.814943 sshd[3892]: Accepted publickey for core from 10.0.0.1 port 39164 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:28.816816 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:28.821570 systemd-logind[1452]: New session 11 of user core. Jan 29 11:57:28.830331 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 11:57:28.891043 containerd[1467]: time="2025-01-29T11:57:28.890208176Z" level=info msg="StartContainer for \"06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47\" returns successfully" Jan 29 11:57:28.986083 sshd[3892]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:28.995913 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:39164.service: Deactivated successfully. Jan 29 11:57:28.998343 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:57:29.001053 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:57:29.009127 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:39170.service - OpenSSH per-connection server daemon (10.0.0.1:39170). Jan 29 11:57:29.011031 systemd-logind[1452]: Removed session 11. Jan 29 11:57:29.037570 sshd[3916]: Accepted publickey for core from 10.0.0.1 port 39170 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:29.039664 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:29.045609 systemd-logind[1452]: New session 12 of user core. Jan 29 11:57:29.054451 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:57:29.353688 sshd[3916]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:29.362301 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:39170.service: Deactivated successfully. Jan 29 11:57:29.366018 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:57:29.368742 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:57:29.377760 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:39184.service - OpenSSH per-connection server daemon (10.0.0.1:39184). Jan 29 11:57:29.379166 systemd-logind[1452]: Removed session 12. Jan 29 11:57:29.407459 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 39184 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:29.409384 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:29.413863 systemd-logind[1452]: New session 13 of user core. Jan 29 11:57:29.424495 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:57:29.542718 sshd[3940]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:29.547175 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:39184.service: Deactivated successfully. Jan 29 11:57:29.549700 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:57:29.550643 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:57:29.551756 systemd-logind[1452]: Removed session 13. Jan 29 11:57:29.900000 kubelet[2592]: E0129 11:57:29.899926 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:29.926574 systemd[1]: run-containerd-runc-k8s.io-06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47-runc.YKSXZD.mount: Deactivated successfully. 
Jan 29 11:57:30.125189 kubelet[2592]: I0129 11:57:30.125077 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9x6fb" podStartSLOduration=4.326828062 podStartE2EDuration="25.125059063s" podCreationTimestamp="2025-01-29 11:57:05 +0000 UTC" firstStartedPulling="2025-01-29 11:57:06.656505366 +0000 UTC m=+23.688564239" lastFinishedPulling="2025-01-29 11:57:27.454736367 +0000 UTC m=+44.486795240" observedRunningTime="2025-01-29 11:57:30.124795398 +0000 UTC m=+47.156854291" watchObservedRunningTime="2025-01-29 11:57:30.125059063 +0000 UTC m=+47.157117936" Jan 29 11:57:30.902171 kubelet[2592]: E0129 11:57:30.902105 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.321079 kubelet[2592]: I0129 11:57:31.321026 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:57:31.321773 kubelet[2592]: E0129 11:57:31.321756 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.502267 kernel: bpftool[4157]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:57:31.742352 systemd-networkd[1402]: vxlan.calico: Link UP Jan 29 11:57:31.742367 systemd-networkd[1402]: vxlan.calico: Gained carrier Jan 29 11:57:31.903780 kubelet[2592]: E0129 11:57:31.903740 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:32.062460 containerd[1467]: time="2025-01-29T11:57:32.062316982Z" level=info msg="StopPodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\"" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.196 [INFO][4247] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.196 [INFO][4247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" iface="eth0" netns="/var/run/netns/cni-dc0b4354-e6f1-5db0-5e40-910d269ae224" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.197 [INFO][4247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" iface="eth0" netns="/var/run/netns/cni-dc0b4354-e6f1-5db0-5e40-910d269ae224" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.197 [INFO][4247] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" iface="eth0" netns="/var/run/netns/cni-dc0b4354-e6f1-5db0-5e40-910d269ae224" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.197 [INFO][4247] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.197 [INFO][4247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.275 [INFO][4255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.276 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.276 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.282 [WARNING][4255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.282 [INFO][4255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.284 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:32.290398 containerd[1467]: 2025-01-29 11:57:32.287 [INFO][4247] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:32.290905 containerd[1467]: time="2025-01-29T11:57:32.290628070Z" level=info msg="TearDown network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" successfully" Jan 29 11:57:32.290905 containerd[1467]: time="2025-01-29T11:57:32.290669000Z" level=info msg="StopPodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" returns successfully" Jan 29 11:57:32.291139 kubelet[2592]: E0129 11:57:32.291110 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:32.291579 containerd[1467]: time="2025-01-29T11:57:32.291515123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8nvd,Uid:2e64f57b-151d-456c-8ce1-abd59806b192,Namespace:kube-system,Attempt:1,}" Jan 29 11:57:32.293473 systemd[1]: run-netns-cni\x2ddc0b4354\x2de6f1\x2d5db0\x2d5e40\x2d910d269ae224.mount: Deactivated successfully. 
Jan 29 11:57:33.048632 systemd-networkd[1402]: cali285289f0374: Link UP Jan 29 11:57:33.049399 systemd-networkd[1402]: cali285289f0374: Gained carrier Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:32.974 [INFO][4264] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0 coredns-7db6d8ff4d- kube-system 2e64f57b-151d-456c-8ce1-abd59806b192 931 0 2025-01-29 11:56:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-q8nvd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali285289f0374 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:32.975 [INFO][4264] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.009 [INFO][4278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" HandleID="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.016 [INFO][4278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" HandleID="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132370), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-q8nvd", "timestamp":"2025-01-29 11:57:33.009117073 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.016 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.016 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.016 [INFO][4278] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.018 [INFO][4278] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.022 [INFO][4278] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.026 [INFO][4278] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.028 [INFO][4278] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.030 [INFO][4278] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.030 [INFO][4278] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.032 [INFO][4278] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.036 [INFO][4278] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.042 [INFO][4278] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.042 [INFO][4278] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" host="localhost" Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.042 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
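In the IPAM walk just above, the plugin confirms the host's affinity for block 192.168.88.128/26 and then claims 192.168.88.129 from it for coredns-7db6d8ff4d-q8nvd. Below is a small sanity check of that containment using values copied from the log (a /26 beginning at .128 covers .128 through .191).

```go
// Confirms that the address claimed above sits inside the host-affine block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // block loaded for host "localhost"
	addr := netip.MustParseAddr("192.168.88.129")       // address auto-assigned above
	fmt.Println(block.Contains(addr))                   // true
}
```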
Jan 29 11:57:33.063659 containerd[1467]: 2025-01-29 11:57:33.042 [INFO][4278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" HandleID="k8s-pod-network.403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.064911 containerd[1467]: 2025-01-29 11:57:33.046 [INFO][4264] cni-plugin/k8s.go 386: Populated endpoint ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e64f57b-151d-456c-8ce1-abd59806b192", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-q8nvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285289f0374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:33.064911 containerd[1467]: 2025-01-29 11:57:33.046 [INFO][4264] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.064911 containerd[1467]: 2025-01-29 11:57:33.046 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali285289f0374 ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.064911 containerd[1467]: 2025-01-29 11:57:33.049 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.064911 containerd[1467]: 2025-01-29 11:57:33.049 
[INFO][4264] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e64f57b-151d-456c-8ce1-abd59806b192", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f", Pod:"coredns-7db6d8ff4d-q8nvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285289f0374", MAC:"ba:88:e9:b6:02:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:33.064911 containerd[1467]: 2025-01-29 11:57:33.058 [INFO][4264] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8nvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:33.066813 containerd[1467]: time="2025-01-29T11:57:33.066779353Z" level=info msg="StopPodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\"" Jan 29 11:57:33.110962 containerd[1467]: time="2025-01-29T11:57:33.110721170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:33.110962 containerd[1467]: time="2025-01-29T11:57:33.110809705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:33.110962 containerd[1467]: time="2025-01-29T11:57:33.110840958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:33.111274 containerd[1467]: time="2025-01-29T11:57:33.111000705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:33.137393 systemd[1]: Started cri-containerd-403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f.scope - libcontainer container 403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f. Jan 29 11:57:33.153731 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.120 [INFO][4315] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.121 [INFO][4315] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" iface="eth0" netns="/var/run/netns/cni-df0d35de-ceed-d4ec-53e4-b65e374e94c5" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.121 [INFO][4315] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" iface="eth0" netns="/var/run/netns/cni-df0d35de-ceed-d4ec-53e4-b65e374e94c5" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.122 [INFO][4315] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" iface="eth0" netns="/var/run/netns/cni-df0d35de-ceed-d4ec-53e4-b65e374e94c5" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.122 [INFO][4315] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.122 [INFO][4315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.150 [INFO][4350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.150 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.150 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.156 [WARNING][4350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.156 [INFO][4350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.157 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:57:33.163608 containerd[1467]: 2025-01-29 11:57:33.160 [INFO][4315] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:33.164188 containerd[1467]: time="2025-01-29T11:57:33.164142319Z" level=info msg="TearDown network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" successfully" Jan 29 11:57:33.164254 containerd[1467]: time="2025-01-29T11:57:33.164239221Z" level=info msg="StopPodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" returns successfully" Jan 29 11:57:33.164925 containerd[1467]: time="2025-01-29T11:57:33.164898220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-5dgzn,Uid:e2c92084-f55b-4220-b802-d9b21f1f159e,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:57:33.179805 containerd[1467]: time="2025-01-29T11:57:33.179768649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8nvd,Uid:2e64f57b-151d-456c-8ce1-abd59806b192,Namespace:kube-system,Attempt:1,} returns sandbox id \"403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f\"" Jan 29 11:57:33.180498 kubelet[2592]: E0129 11:57:33.180477 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:33.182113 containerd[1467]: time="2025-01-29T11:57:33.182078696Z" level=info msg="CreateContainer within sandbox \"403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:57:33.213433 containerd[1467]: time="2025-01-29T11:57:33.213375743Z" level=info msg="CreateContainer within sandbox \"403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9127c9035f7bc33968f26e2c96e071a86348a36d0f280eeda9fe9335a480c55\"" Jan 29 11:57:33.214165 containerd[1467]: time="2025-01-29T11:57:33.214119459Z" level=info msg="StartContainer for \"d9127c9035f7bc33968f26e2c96e071a86348a36d0f280eeda9fe9335a480c55\"" Jan 29 11:57:33.249599 systemd[1]: Started cri-containerd-d9127c9035f7bc33968f26e2c96e071a86348a36d0f280eeda9fe9335a480c55.scope - libcontainer container d9127c9035f7bc33968f26e2c96e071a86348a36d0f280eeda9fe9335a480c55. Jan 29 11:57:33.296032 containerd[1467]: time="2025-01-29T11:57:33.295977106Z" level=info msg="StartContainer for \"d9127c9035f7bc33968f26e2c96e071a86348a36d0f280eeda9fe9335a480c55\" returns successfully" Jan 29 11:57:33.296164 systemd[1]: run-netns-cni\x2ddf0d35de\x2dceed\x2dd4ec\x2d53e4\x2db65e374e94c5.mount: Deactivated successfully. 
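The v3.WorkloadEndpoint dumps earlier in this sequence print the coredns endpoint ports in hexadecimal (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). Decoded, these are the ordinary CoreDNS ports, as this small sketch shows.

```go
// Decodes the hex port values from the WorkloadEndpoint dumps above.
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153 (the dns/dns-tcp and metrics ports)
}
```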
Jan 29 11:57:33.308666 systemd-networkd[1402]: calidb818898bda: Link UP Jan 29 11:57:33.310268 systemd-networkd[1402]: calidb818898bda: Gained carrier Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.223 [INFO][4373] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0 calico-apiserver-67b8685944- calico-apiserver e2c92084-f55b-4220-b802-d9b21f1f159e 940 0 2025-01-29 11:57:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b8685944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b8685944-5dgzn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb818898bda [] []}} ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.223 [INFO][4373] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.255 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" HandleID="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.266 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" HandleID="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000281140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b8685944-5dgzn", "timestamp":"2025-01-29 11:57:33.255176297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.266 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.266 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.266 [INFO][4400] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.268 [INFO][4400] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.273 [INFO][4400] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.277 [INFO][4400] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.280 [INFO][4400] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.282 [INFO][4400] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.282 [INFO][4400] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.284 [INFO][4400] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.295 [INFO][4400] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.301 [INFO][4400] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.301 [INFO][4400] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" host="localhost" Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.301 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:57:33.326718 containerd[1467]: 2025-01-29 11:57:33.301 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" HandleID="k8s-pod-network.10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.327514 containerd[1467]: 2025-01-29 11:57:33.304 [INFO][4373] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2c92084-f55b-4220-b802-d9b21f1f159e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b8685944-5dgzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb818898bda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:33.327514 containerd[1467]: 2025-01-29 11:57:33.304 [INFO][4373] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.327514 containerd[1467]: 2025-01-29 11:57:33.304 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb818898bda ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.327514 containerd[1467]: 2025-01-29 11:57:33.313 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.327514 containerd[1467]: 2025-01-29 11:57:33.313 [INFO][4373] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2c92084-f55b-4220-b802-d9b21f1f159e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba", Pod:"calico-apiserver-67b8685944-5dgzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb818898bda", MAC:"06:36:3c:27:d7:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:33.327514 containerd[1467]: 2025-01-29 11:57:33.322 [INFO][4373] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-5dgzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:33.331321 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Jan 29 11:57:33.353220 containerd[1467]: time="2025-01-29T11:57:33.353090948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:33.353849 containerd[1467]: time="2025-01-29T11:57:33.353799205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:33.353849 containerd[1467]: time="2025-01-29T11:57:33.353822330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:33.354031 containerd[1467]: time="2025-01-29T11:57:33.353920966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:33.386332 systemd[1]: Started cri-containerd-10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba.scope - libcontainer container 10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba. 
Jan 29 11:57:33.401621 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:33.429291 containerd[1467]: time="2025-01-29T11:57:33.429234849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-5dgzn,Uid:e2c92084-f55b-4220-b802-d9b21f1f159e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba\"" Jan 29 11:57:33.431269 containerd[1467]: time="2025-01-29T11:57:33.431239589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:57:33.908484 kubelet[2592]: E0129 11:57:33.908412 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:34.035682 kubelet[2592]: I0129 11:57:34.035351 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q8nvd" podStartSLOduration=36.035329827 podStartE2EDuration="36.035329827s" podCreationTimestamp="2025-01-29 11:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:33.993997429 +0000 UTC m=+51.026056302" watchObservedRunningTime="2025-01-29 11:57:34.035329827 +0000 UTC m=+51.067388700" Jan 29 11:57:34.062298 containerd[1467]: time="2025-01-29T11:57:34.062250919Z" level=info msg="StopPodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\"" Jan 29 11:57:34.062515 containerd[1467]: time="2025-01-29T11:57:34.062263806Z" level=info msg="StopPodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\"" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.110 [INFO][4505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.111 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" iface="eth0" netns="/var/run/netns/cni-c934e912-f48b-d5e5-b5d1-b43c67015696" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.113 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" iface="eth0" netns="/var/run/netns/cni-c934e912-f48b-d5e5-b5d1-b43c67015696" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.113 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" iface="eth0" netns="/var/run/netns/cni-c934e912-f48b-d5e5-b5d1-b43c67015696" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.113 [INFO][4505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.113 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.139 [INFO][4538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.139 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.139 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.146 [WARNING][4538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.146 [INFO][4538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.147 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:34.152560 containerd[1467]: 2025-01-29 11:57:34.150 [INFO][4505] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:34.155370 containerd[1467]: time="2025-01-29T11:57:34.155260626Z" level=info msg="TearDown network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" successfully" Jan 29 11:57:34.155370 containerd[1467]: time="2025-01-29T11:57:34.155307569Z" level=info msg="StopPodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" returns successfully" Jan 29 11:57:34.155630 systemd[1]: run-netns-cni\x2dc934e912\x2df48b\x2dd5e5\x2db5d1\x2db43c67015696.mount: Deactivated successfully. Jan 29 11:57:34.156127 containerd[1467]: time="2025-01-29T11:57:34.156043990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-q45lw,Uid:7fcf1af3-18c6-4f40-a2a4-51e333c1c84a,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.123 [INFO][4527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.123 [INFO][4527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" iface="eth0" netns="/var/run/netns/cni-2ede62ee-a850-e29c-fa83-4ac5968e0ce5" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.124 [INFO][4527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" iface="eth0" netns="/var/run/netns/cni-2ede62ee-a850-e29c-fa83-4ac5968e0ce5" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.124 [INFO][4527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" iface="eth0" netns="/var/run/netns/cni-2ede62ee-a850-e29c-fa83-4ac5968e0ce5" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.124 [INFO][4527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.124 [INFO][4527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.149 [INFO][4544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.150 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.150 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.154 [WARNING][4544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.154 [INFO][4544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.157 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:34.162678 containerd[1467]: 2025-01-29 11:57:34.159 [INFO][4527] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:34.163145 containerd[1467]: time="2025-01-29T11:57:34.162785666Z" level=info msg="TearDown network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" successfully" Jan 29 11:57:34.163145 containerd[1467]: time="2025-01-29T11:57:34.162810605Z" level=info msg="StopPodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" returns successfully" Jan 29 11:57:34.163896 containerd[1467]: time="2025-01-29T11:57:34.163638017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtgqv,Uid:9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51,Namespace:calico-system,Attempt:1,}" Jan 29 11:57:34.294824 systemd-networkd[1402]: cali9b41d6d01b0: Link UP Jan 29 11:57:34.297066 systemd-networkd[1402]: cali9b41d6d01b0: Gained carrier Jan 29 11:57:34.299561 systemd[1]: run-netns-cni\x2d2ede62ee\x2da850\x2de29c\x2dfa83\x2d4ac5968e0ce5.mount: Deactivated successfully. Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.206 [INFO][4552] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0 calico-apiserver-67b8685944- calico-apiserver 7fcf1af3-18c6-4f40-a2a4-51e333c1c84a 961 0 2025-01-29 11:57:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b8685944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b8685944-q45lw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b41d6d01b0 [] []}} ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.206 [INFO][4552] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.242 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" HandleID="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.256 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" HandleID="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b8685944-q45lw", "timestamp":"2025-01-29 11:57:34.242935788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.256 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.256 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.256 [INFO][4578] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.258 [INFO][4578] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.263 [INFO][4578] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.268 [INFO][4578] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.270 [INFO][4578] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.272 [INFO][4578] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.272 [INFO][4578] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.274 [INFO][4578] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62 Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.279 [INFO][4578] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.284 [INFO][4578] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.284 [INFO][4578] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" host="localhost" Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.284 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
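The teardown entries above found nothing to release (the IPAM handles no longer existed), and the follow-up RunPodSandbox for calico-apiserver-67b8685944-q45lw kicks off the allocation cycle logged here: the plugin takes the host-wide IPAM lock, confirms the host's affinity to the block 192.168.88.128/26, and claims 192.168.88.131/26. As a hedged aside, a stand-alone Go check of that block arithmetic using only the standard library (not Calico's own code) looks like this; the /26 spans 192.168.88.128 through 192.168.88.191, so .131 falls inside it:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The IPAM log above claims 192.168.88.131/26 out of the host's
        // affine block 192.168.88.128/26; verify the containment.
        _, block, err := net.ParseCIDR("192.168.88.128/26")
        if err != nil {
            panic(err)
        }
        fmt.Println(block.Contains(net.ParseIP("192.168.88.131"))) // prints: true
    }
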
Jan 29 11:57:34.313939 containerd[1467]: 2025-01-29 11:57:34.284 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" HandleID="k8s-pod-network.a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.315453 containerd[1467]: 2025-01-29 11:57:34.287 [INFO][4552] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b8685944-q45lw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b41d6d01b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:34.315453 containerd[1467]: 2025-01-29 11:57:34.287 [INFO][4552] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.315453 containerd[1467]: 2025-01-29 11:57:34.287 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b41d6d01b0 ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.315453 containerd[1467]: 2025-01-29 11:57:34.296 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.315453 containerd[1467]: 2025-01-29 11:57:34.301 [INFO][4552] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62", Pod:"calico-apiserver-67b8685944-q45lw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b41d6d01b0", MAC:"56:e1:d7:13:36:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:34.315453 containerd[1467]: 2025-01-29 11:57:34.310 [INFO][4552] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62" Namespace="calico-apiserver" Pod="calico-apiserver-67b8685944-q45lw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:34.329098 systemd-networkd[1402]: caliec8f72200dc: Link UP Jan 29 11:57:34.329402 systemd-networkd[1402]: caliec8f72200dc: Gained carrier Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.233 [INFO][4565] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jtgqv-eth0 csi-node-driver- calico-system 9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51 962 0 2025-01-29 11:57:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jtgqv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliec8f72200dc [] []}} ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.233 [INFO][4565] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.262 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" HandleID="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.270 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" HandleID="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jtgqv", "timestamp":"2025-01-29 11:57:34.262776995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.270 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.284 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.284 [INFO][4586] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.286 [INFO][4586] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.294 [INFO][4586] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.301 [INFO][4586] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.304 [INFO][4586] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.306 [INFO][4586] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.306 [INFO][4586] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.307 [INFO][4586] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.311 [INFO][4586] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.319 [INFO][4586] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" 
host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.319 [INFO][4586] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" host="localhost" Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.319 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:34.349535 containerd[1467]: 2025-01-29 11:57:34.319 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" HandleID="k8s-pod-network.9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.350360 containerd[1467]: 2025-01-29 11:57:34.323 [INFO][4565] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jtgqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jtgqv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec8f72200dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:34.350360 containerd[1467]: 2025-01-29 11:57:34.324 [INFO][4565] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.350360 containerd[1467]: 2025-01-29 11:57:34.324 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec8f72200dc ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.350360 containerd[1467]: 2025-01-29 11:57:34.329 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.350360 containerd[1467]: 2025-01-29 11:57:34.331 [INFO][4565] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jtgqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f", Pod:"csi-node-driver-jtgqv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec8f72200dc", MAC:"a2:d5:cf:27:7b:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:34.350360 containerd[1467]: 2025-01-29 11:57:34.345 [INFO][4565] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f" Namespace="calico-system" Pod="csi-node-driver-jtgqv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:34.352184 containerd[1467]: time="2025-01-29T11:57:34.352019666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:34.352345 containerd[1467]: time="2025-01-29T11:57:34.352126789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:34.352564 containerd[1467]: time="2025-01-29T11:57:34.352298751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:34.354184 containerd[1467]: time="2025-01-29T11:57:34.354055326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:34.379808 containerd[1467]: time="2025-01-29T11:57:34.379431783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:34.379808 containerd[1467]: time="2025-01-29T11:57:34.379514227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:34.379808 containerd[1467]: time="2025-01-29T11:57:34.379533225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:34.379808 containerd[1467]: time="2025-01-29T11:57:34.379673103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:34.379835 systemd[1]: Started cri-containerd-a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62.scope - libcontainer container a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62. Jan 29 11:57:34.399229 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:34.406346 systemd[1]: Started cri-containerd-9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f.scope - libcontainer container 9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f. Jan 29 11:57:34.420231 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:34.430038 containerd[1467]: time="2025-01-29T11:57:34.430001106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b8685944-q45lw,Uid:7fcf1af3-18c6-4f40-a2a4-51e333c1c84a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62\"" Jan 29 11:57:34.437332 containerd[1467]: time="2025-01-29T11:57:34.437276681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtgqv,Uid:9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51,Namespace:calico-system,Attempt:1,} returns sandbox id \"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f\"" Jan 29 11:57:34.558333 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:34624.service - OpenSSH per-connection server daemon (10.0.0.1:34624). Jan 29 11:57:34.596522 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:34.598486 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:34.602970 systemd-logind[1452]: New session 14 of user core. Jan 29 11:57:34.618313 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:57:34.770974 sshd[4705]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:34.776525 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:34624.service: Deactivated successfully. Jan 29 11:57:34.779342 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:57:34.780109 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:57:34.781314 systemd-logind[1452]: Removed session 14. 
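Both RunPodSandbox calls above return their sandbox IDs, and systemd starts a matching cri-containerd-<id>.scope for each libcontainer shim; the SSH session 14 entries that follow are unrelated housekeeping. When sifting through logs like these, the 64-character hex sandbox ID can be pulled out of the escaped containerd message with a small helper; the Go snippet below is illustrative only and assumes the quoting style shown in the entries above:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Abbreviated copy of a containerd entry from the log above; the
        // inner quotes around the sandbox ID are backslash-escaped.
        line := `level=info msg="RunPodSandbox ... returns sandbox id \"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62\""`
        re := regexp.MustCompile(`returns sandbox id \\"([0-9a-f]{64})\\"`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Println(m[1]) // prints the sandbox ID
        }
    }
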
Jan 29 11:57:34.866545 systemd-networkd[1402]: cali285289f0374: Gained IPv6LL Jan 29 11:57:34.924801 kubelet[2592]: E0129 11:57:34.924766 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:34.994346 systemd-networkd[1402]: calidb818898bda: Gained IPv6LL Jan 29 11:57:35.634437 systemd-networkd[1402]: caliec8f72200dc: Gained IPv6LL Jan 29 11:57:35.826406 systemd-networkd[1402]: cali9b41d6d01b0: Gained IPv6LL Jan 29 11:57:35.926907 kubelet[2592]: E0129 11:57:35.926742 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:36.063227 containerd[1467]: time="2025-01-29T11:57:36.063142123Z" level=info msg="StopPodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\"" Jan 29 11:57:36.063995 containerd[1467]: time="2025-01-29T11:57:36.063449853Z" level=info msg="StopPodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\"" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.124 [INFO][4760] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.124 [INFO][4760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" iface="eth0" netns="/var/run/netns/cni-736ca80d-a4a1-5460-67b5-68ab4e080a08" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.125 [INFO][4760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" iface="eth0" netns="/var/run/netns/cni-736ca80d-a4a1-5460-67b5-68ab4e080a08" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.125 [INFO][4760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" iface="eth0" netns="/var/run/netns/cni-736ca80d-a4a1-5460-67b5-68ab4e080a08" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.126 [INFO][4760] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.126 [INFO][4760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.153 [INFO][4775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.154 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.154 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.159 [WARNING][4775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.159 [INFO][4775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.161 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:36.165787 containerd[1467]: 2025-01-29 11:57:36.163 [INFO][4760] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:36.168401 containerd[1467]: time="2025-01-29T11:57:36.168344453Z" level=info msg="TearDown network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" successfully" Jan 29 11:57:36.168452 containerd[1467]: time="2025-01-29T11:57:36.168400693Z" level=info msg="StopPodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" returns successfully" Jan 29 11:57:36.168911 kubelet[2592]: E0129 11:57:36.168876 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:36.168974 systemd[1]: run-netns-cni\x2d736ca80d\x2da4a1\x2d5460\x2d67b5\x2d68ab4e080a08.mount: Deactivated successfully. Jan 29 11:57:36.169527 containerd[1467]: time="2025-01-29T11:57:36.169227531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ltr8,Uid:47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7,Namespace:kube-system,Attempt:1,}" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.142 [INFO][4759] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.142 [INFO][4759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" iface="eth0" netns="/var/run/netns/cni-82c43c4e-0cee-c89c-8339-298178e36d63" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.143 [INFO][4759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" iface="eth0" netns="/var/run/netns/cni-82c43c4e-0cee-c89c-8339-298178e36d63" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.143 [INFO][4759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" iface="eth0" netns="/var/run/netns/cni-82c43c4e-0cee-c89c-8339-298178e36d63" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.144 [INFO][4759] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.144 [INFO][4759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.251 [INFO][4781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.251 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.252 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.257 [WARNING][4781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.257 [INFO][4781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.259 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:36.264390 containerd[1467]: 2025-01-29 11:57:36.262 [INFO][4759] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:36.264867 containerd[1467]: time="2025-01-29T11:57:36.264590658Z" level=info msg="TearDown network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" successfully" Jan 29 11:57:36.264867 containerd[1467]: time="2025-01-29T11:57:36.264616600Z" level=info msg="StopPodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" returns successfully" Jan 29 11:57:36.265306 containerd[1467]: time="2025-01-29T11:57:36.265280945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f599f856-ftjvl,Uid:34001644-e4ba-464a-bc44-9457515a4f0a,Namespace:calico-system,Attempt:1,}" Jan 29 11:57:36.267988 systemd[1]: run-netns-cni\x2d82c43c4e\x2d0cee\x2dc89c\x2d8339\x2d298178e36d63.mount: Deactivated successfully. 
Jan 29 11:57:36.861304 containerd[1467]: time="2025-01-29T11:57:36.861240396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:36.891070 containerd[1467]: time="2025-01-29T11:57:36.890974911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:57:36.922455 containerd[1467]: time="2025-01-29T11:57:36.921849464Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:36.983057 containerd[1467]: time="2025-01-29T11:57:36.982998512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:36.983775 containerd[1467]: time="2025-01-29T11:57:36.983711654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.552434871s" Jan 29 11:57:36.983775 containerd[1467]: time="2025-01-29T11:57:36.983771243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:57:36.984858 containerd[1467]: time="2025-01-29T11:57:36.984700033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:57:36.986445 containerd[1467]: time="2025-01-29T11:57:36.986402976Z" level=info msg="CreateContainer within sandbox \"10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:57:37.243744 systemd-networkd[1402]: cali1f283209828: Link UP Jan 29 11:57:37.244590 systemd-networkd[1402]: cali1f283209828: Gained carrier Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.885 [INFO][4793] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0 coredns-7db6d8ff4d- kube-system 47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7 979 0 2025-01-29 11:56:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5ltr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f283209828 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.885 [INFO][4793] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.919 [INFO][4822] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" HandleID="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.932 [INFO][4822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" HandleID="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003087e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5ltr8", "timestamp":"2025-01-29 11:57:36.919055499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.932 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.932 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.932 [INFO][4822] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.935 [INFO][4822] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.939 [INFO][4822] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.943 [INFO][4822] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.945 [INFO][4822] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.947 [INFO][4822] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.947 [INFO][4822] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:36.948 [INFO][4822] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898 Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:37.018 [INFO][4822] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:37.238 [INFO][4822] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:37.238 [INFO][4822] ipam/ipam.go 847: Auto-assigned 1 out of 
1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" host="localhost" Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:37.238 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:37.522302 containerd[1467]: 2025-01-29 11:57:37.238 [INFO][4822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" HandleID="k8s-pod-network.37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.523519 containerd[1467]: 2025-01-29 11:57:37.240 [INFO][4793] cni-plugin/k8s.go 386: Populated endpoint ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5ltr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f283209828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:37.523519 containerd[1467]: 2025-01-29 11:57:37.241 [INFO][4793] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.523519 containerd[1467]: 2025-01-29 11:57:37.241 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f283209828 ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.523519 containerd[1467]: 2025-01-29 11:57:37.244 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.523519 containerd[1467]: 2025-01-29 11:57:37.245 [INFO][4793] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898", Pod:"coredns-7db6d8ff4d-5ltr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f283209828", MAC:"da:8e:98:02:ce:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:37.523519 containerd[1467]: 2025-01-29 11:57:37.517 [INFO][4793] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5ltr8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:37.660668 containerd[1467]: time="2025-01-29T11:57:37.660561615Z" level=info msg="CreateContainer within sandbox \"10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb8f9f38f29a7e1c6eeab1638fbd060d23d205ebde15d2a63a6183d9fbcb281e\"" Jan 29 11:57:37.661306 containerd[1467]: time="2025-01-29T11:57:37.661240028Z" level=info msg="StartContainer for \"fb8f9f38f29a7e1c6eeab1638fbd060d23d205ebde15d2a63a6183d9fbcb281e\"" Jan 29 11:57:37.696575 containerd[1467]: time="2025-01-29T11:57:37.696511012Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:37.698611 containerd[1467]: time="2025-01-29T11:57:37.698239113Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:57:37.704190 systemd-networkd[1402]: cali7fefa4fea3c: Link UP Jan 29 11:57:37.705790 systemd-networkd[1402]: cali7fefa4fea3c: Gained carrier Jan 29 11:57:37.712093 containerd[1467]: time="2025-01-29T11:57:37.711572398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:37.712093 containerd[1467]: time="2025-01-29T11:57:37.711649751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:37.712736 containerd[1467]: time="2025-01-29T11:57:37.711671033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:37.715196 containerd[1467]: time="2025-01-29T11:57:37.715102887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 730.363125ms" Jan 29 11:57:37.715196 containerd[1467]: time="2025-01-29T11:57:37.715182726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:57:37.717657 systemd[1]: Started cri-containerd-fb8f9f38f29a7e1c6eeab1638fbd060d23d205ebde15d2a63a6183d9fbcb281e.scope - libcontainer container fb8f9f38f29a7e1c6eeab1638fbd060d23d205ebde15d2a63a6183d9fbcb281e. Jan 29 11:57:37.718919 containerd[1467]: time="2025-01-29T11:57:37.718859724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:57:37.719912 containerd[1467]: time="2025-01-29T11:57:37.719748604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:36.891 [INFO][4809] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0 calico-kube-controllers-65f599f856- calico-system 34001644-e4ba-464a-bc44-9457515a4f0a 980 0 2025-01-29 11:57:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65f599f856 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65f599f856-ftjvl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7fefa4fea3c [] []}} ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:36.891 [INFO][4809] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:36.941 [INFO][4823] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" HandleID="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:36.950 [INFO][4823] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" HandleID="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddb10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65f599f856-ftjvl", "timestamp":"2025-01-29 11:57:36.941289916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:36.950 [INFO][4823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.238 [INFO][4823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.238 [INFO][4823] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.241 [INFO][4823] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.247 [INFO][4823] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.516 [INFO][4823] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.521 [INFO][4823] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.582 [INFO][4823] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.582 [INFO][4823] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.665 [INFO][4823] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271 Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.674 [INFO][4823] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.689 [INFO][4823] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.690 [INFO][4823] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" host="localhost" Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.690 [INFO][4823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
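The allocation for calico-kube-controllers-65f599f856-ftjvl repeats the pattern already logged for the apiserver, csi-node-driver, and coredns pods: acquire the host-wide IPAM lock, confirm the host's affinity to 192.168.88.128/26, claim the next free address (192.168.88.134/26 here), write the block back, and release the lock. The Go sketch below is a schematic of that acquire/assign/release sequence only; Calico's real lock and free list live in the datastore, so the in-process mutex and slice are assumptions made purely for illustration:

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM is a toy model of the per-host allocator whose log lines appear
    // above. The type name and in-memory free list are illustrative, not Calico's.
    type hostIPAM struct {
        mu   sync.Mutex // stands in for the datastore-backed host-wide lock
        free []string   // unassigned addresses from the affine /26 block
    }

    // assign mirrors the logged sequence: lock, claim one address, unlock.
    func (h *hostIPAM) assign() (string, bool) {
        h.mu.Lock()         // "About to acquire host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        if len(h.free) == 0 {
            return "", false
        }
        ip := h.free[0]
        h.free = h.free[1:]
        return ip, true
    }

    func main() {
        h := &hostIPAM{free: []string{"192.168.88.134/26", "192.168.88.135/26"}}
        if ip, ok := h.assign(); ok {
            fmt.Println("assigned", ip)
        }
    }
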
Jan 29 11:57:37.739062 containerd[1467]: 2025-01-29 11:57:37.690 [INFO][4823] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" HandleID="k8s-pod-network.d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739780 containerd[1467]: 2025-01-29 11:57:37.697 [INFO][4809] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0", GenerateName:"calico-kube-controllers-65f599f856-", Namespace:"calico-system", SelfLink:"", UID:"34001644-e4ba-464a-bc44-9457515a4f0a", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f599f856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65f599f856-ftjvl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7fefa4fea3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:37.739780 containerd[1467]: 2025-01-29 11:57:37.697 [INFO][4809] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739780 containerd[1467]: 2025-01-29 11:57:37.697 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fefa4fea3c ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739780 containerd[1467]: 2025-01-29 11:57:37.706 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739780 containerd[1467]: 2025-01-29 11:57:37.710 [INFO][4809] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0", GenerateName:"calico-kube-controllers-65f599f856-", Namespace:"calico-system", SelfLink:"", UID:"34001644-e4ba-464a-bc44-9457515a4f0a", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f599f856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271", Pod:"calico-kube-controllers-65f599f856-ftjvl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7fefa4fea3c", MAC:"76:c5:84:b5:7b:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:37.739780 containerd[1467]: 2025-01-29 11:57:37.730 [INFO][4809] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271" Namespace="calico-system" Pod="calico-kube-controllers-65f599f856-ftjvl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:37.739780 containerd[1467]: time="2025-01-29T11:57:37.738628009Z" level=info msg="CreateContainer within sandbox \"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:57:37.762553 systemd[1]: Started cri-containerd-37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898.scope - libcontainer container 37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898. Jan 29 11:57:37.782252 containerd[1467]: time="2025-01-29T11:57:37.780388249Z" level=info msg="CreateContainer within sandbox \"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe7325c50d6ffab761e36ea054b520b773a05217875d96b8bea5f145623bddcc\"" Jan 29 11:57:37.790704 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:37.793419 containerd[1467]: time="2025-01-29T11:57:37.792363296Z" level=info msg="StartContainer for \"fe7325c50d6ffab761e36ea054b520b773a05217875d96b8bea5f145623bddcc\"" Jan 29 11:57:37.793683 containerd[1467]: time="2025-01-29T11:57:37.790527732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:37.793683 containerd[1467]: time="2025-01-29T11:57:37.790603191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:37.793683 containerd[1467]: time="2025-01-29T11:57:37.790618151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:37.793683 containerd[1467]: time="2025-01-29T11:57:37.790728419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:37.822394 systemd[1]: Started cri-containerd-d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271.scope - libcontainer container d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271. Jan 29 11:57:37.855425 systemd[1]: Started cri-containerd-fe7325c50d6ffab761e36ea054b520b773a05217875d96b8bea5f145623bddcc.scope - libcontainer container fe7325c50d6ffab761e36ea054b520b773a05217875d96b8bea5f145623bddcc. Jan 29 11:57:37.866861 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:57:37.895386 containerd[1467]: time="2025-01-29T11:57:37.895343601Z" level=info msg="StartContainer for \"fb8f9f38f29a7e1c6eeab1638fbd060d23d205ebde15d2a63a6183d9fbcb281e\" returns successfully" Jan 29 11:57:37.897807 containerd[1467]: time="2025-01-29T11:57:37.895653965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ltr8,Uid:47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7,Namespace:kube-system,Attempt:1,} returns sandbox id \"37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898\"" Jan 29 11:57:37.898417 kubelet[2592]: E0129 11:57:37.898396 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:37.900645 containerd[1467]: time="2025-01-29T11:57:37.900337878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f599f856-ftjvl,Uid:34001644-e4ba-464a-bc44-9457515a4f0a,Namespace:calico-system,Attempt:1,} returns sandbox id \"d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271\"" Jan 29 11:57:37.910021 containerd[1467]: time="2025-01-29T11:57:37.909798276Z" level=info msg="CreateContainer within sandbox \"37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:57:37.925017 containerd[1467]: time="2025-01-29T11:57:37.924962265Z" level=info msg="StartContainer for \"fe7325c50d6ffab761e36ea054b520b773a05217875d96b8bea5f145623bddcc\" returns successfully" Jan 29 11:57:37.939183 containerd[1467]: time="2025-01-29T11:57:37.938839988Z" level=info msg="CreateContainer within sandbox \"37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e187b46c8b7c329708cf11ae0822522252107bf1007f84ebd3c0d20c236e510a\"" Jan 29 11:57:37.939674 containerd[1467]: time="2025-01-29T11:57:37.939636996Z" level=info msg="StartContainer for \"e187b46c8b7c329708cf11ae0822522252107bf1007f84ebd3c0d20c236e510a\"" Jan 29 11:57:37.956485 kubelet[2592]: I0129 11:57:37.956402 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67b8685944-q45lw" 
podStartSLOduration=29.66912906 podStartE2EDuration="32.956380733s" podCreationTimestamp="2025-01-29 11:57:05 +0000 UTC" firstStartedPulling="2025-01-29 11:57:34.431121719 +0000 UTC m=+51.463180582" lastFinishedPulling="2025-01-29 11:57:37.718373382 +0000 UTC m=+54.750432255" observedRunningTime="2025-01-29 11:57:37.952443769 +0000 UTC m=+54.984502663" watchObservedRunningTime="2025-01-29 11:57:37.956380733 +0000 UTC m=+54.988439607" Jan 29 11:57:37.982216 systemd[1]: Started cri-containerd-e187b46c8b7c329708cf11ae0822522252107bf1007f84ebd3c0d20c236e510a.scope - libcontainer container e187b46c8b7c329708cf11ae0822522252107bf1007f84ebd3c0d20c236e510a. Jan 29 11:57:38.103141 containerd[1467]: time="2025-01-29T11:57:38.103013543Z" level=info msg="StartContainer for \"e187b46c8b7c329708cf11ae0822522252107bf1007f84ebd3c0d20c236e510a\" returns successfully" Jan 29 11:57:38.668459 systemd[1]: run-containerd-runc-k8s.io-37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898-runc.cuCTg6.mount: Deactivated successfully. Jan 29 11:57:38.950467 kubelet[2592]: I0129 11:57:38.950339 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:57:38.951060 kubelet[2592]: E0129 11:57:38.950778 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:38.952274 kubelet[2592]: I0129 11:57:38.951178 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:57:39.000460 kubelet[2592]: I0129 11:57:38.999733 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67b8685944-5dgzn" podStartSLOduration=30.445966611 podStartE2EDuration="33.999712651s" podCreationTimestamp="2025-01-29 11:57:05 +0000 UTC" firstStartedPulling="2025-01-29 11:57:33.430840207 +0000 UTC m=+50.462899080" lastFinishedPulling="2025-01-29 11:57:36.984586247 +0000 UTC m=+54.016645120" observedRunningTime="2025-01-29 11:57:37.973664169 +0000 UTC m=+55.005723062" watchObservedRunningTime="2025-01-29 11:57:38.999712651 +0000 UTC m=+56.031771524" Jan 29 11:57:39.081773 kubelet[2592]: I0129 11:57:39.081690 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5ltr8" podStartSLOduration=41.081665295 podStartE2EDuration="41.081665295s" podCreationTimestamp="2025-01-29 11:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:39.000939147 +0000 UTC m=+56.032998040" watchObservedRunningTime="2025-01-29 11:57:39.081665295 +0000 UTC m=+56.113724168" Jan 29 11:57:39.154595 systemd-networkd[1402]: cali1f283209828: Gained IPv6LL Jan 29 11:57:39.253359 containerd[1467]: time="2025-01-29T11:57:39.253296827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:39.254345 containerd[1467]: time="2025-01-29T11:57:39.254296324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:57:39.261438 containerd[1467]: time="2025-01-29T11:57:39.261398697Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:39.264068 containerd[1467]: 
time="2025-01-29T11:57:39.263997425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:39.264630 containerd[1467]: time="2025-01-29T11:57:39.264598062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.545703729s" Jan 29 11:57:39.264670 containerd[1467]: time="2025-01-29T11:57:39.264628983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:57:39.265781 containerd[1467]: time="2025-01-29T11:57:39.265740200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:57:39.267142 containerd[1467]: time="2025-01-29T11:57:39.267099478Z" level=info msg="CreateContainer within sandbox \"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:57:39.286647 containerd[1467]: time="2025-01-29T11:57:39.286587188Z" level=info msg="CreateContainer within sandbox \"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a43a8c1fc663f078057b05d5de310d54de98d4aac4dfa8acf9de89de8efcb636\"" Jan 29 11:57:39.287116 containerd[1467]: time="2025-01-29T11:57:39.287073429Z" level=info msg="StartContainer for \"a43a8c1fc663f078057b05d5de310d54de98d4aac4dfa8acf9de89de8efcb636\"" Jan 29 11:57:39.326450 systemd[1]: Started cri-containerd-a43a8c1fc663f078057b05d5de310d54de98d4aac4dfa8acf9de89de8efcb636.scope - libcontainer container a43a8c1fc663f078057b05d5de310d54de98d4aac4dfa8acf9de89de8efcb636. Jan 29 11:57:39.394241 containerd[1467]: time="2025-01-29T11:57:39.394139295Z" level=info msg="StartContainer for \"a43a8c1fc663f078057b05d5de310d54de98d4aac4dfa8acf9de89de8efcb636\" returns successfully" Jan 29 11:57:39.538385 systemd-networkd[1402]: cali7fefa4fea3c: Gained IPv6LL Jan 29 11:57:39.795090 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640). Jan 29 11:57:39.837000 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:39.839272 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:39.844146 systemd-logind[1452]: New session 15 of user core. Jan 29 11:57:39.854357 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:57:39.959455 kubelet[2592]: E0129 11:57:39.959425 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:39.986770 sshd[5146]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:39.991799 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:34640.service: Deactivated successfully. Jan 29 11:57:39.994734 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:57:39.995662 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. 
Jan 29 11:57:39.997009 systemd-logind[1452]: Removed session 15. Jan 29 11:57:40.960760 kubelet[2592]: E0129 11:57:40.960706 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:41.240843 containerd[1467]: time="2025-01-29T11:57:41.240672313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:41.241718 containerd[1467]: time="2025-01-29T11:57:41.241672648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:57:41.243062 containerd[1467]: time="2025-01-29T11:57:41.243023606Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:41.246047 containerd[1467]: time="2025-01-29T11:57:41.245953383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:41.246663 containerd[1467]: time="2025-01-29T11:57:41.246625390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.980848607s" Jan 29 11:57:41.246741 containerd[1467]: time="2025-01-29T11:57:41.246661270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:57:41.247991 containerd[1467]: time="2025-01-29T11:57:41.247950526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:57:41.260768 containerd[1467]: time="2025-01-29T11:57:41.260709535Z" level=info msg="CreateContainer within sandbox \"d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:57:41.277514 containerd[1467]: time="2025-01-29T11:57:41.277456680Z" level=info msg="CreateContainer within sandbox \"d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8f565d3af28c2914f3913665e65d811b1bbaf466e34747b87bd1515af16f6f55\"" Jan 29 11:57:41.278200 containerd[1467]: time="2025-01-29T11:57:41.278110581Z" level=info msg="StartContainer for \"8f565d3af28c2914f3913665e65d811b1bbaf466e34747b87bd1515af16f6f55\"" Jan 29 11:57:41.327422 systemd[1]: Started cri-containerd-8f565d3af28c2914f3913665e65d811b1bbaf466e34747b87bd1515af16f6f55.scope - libcontainer container 8f565d3af28c2914f3913665e65d811b1bbaf466e34747b87bd1515af16f6f55. 
Jan 29 11:57:41.372663 containerd[1467]: time="2025-01-29T11:57:41.372610054Z" level=info msg="StartContainer for \"8f565d3af28c2914f3913665e65d811b1bbaf466e34747b87bd1515af16f6f55\" returns successfully" Jan 29 11:57:42.023924 kubelet[2592]: I0129 11:57:42.023804 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65f599f856-ftjvl" podStartSLOduration=32.68555413 podStartE2EDuration="36.023745259s" podCreationTimestamp="2025-01-29 11:57:06 +0000 UTC" firstStartedPulling="2025-01-29 11:57:37.909534123 +0000 UTC m=+54.941592996" lastFinishedPulling="2025-01-29 11:57:41.247725252 +0000 UTC m=+58.279784125" observedRunningTime="2025-01-29 11:57:42.023043134 +0000 UTC m=+59.055102027" watchObservedRunningTime="2025-01-29 11:57:42.023745259 +0000 UTC m=+59.055804132" Jan 29 11:57:42.839444 containerd[1467]: time="2025-01-29T11:57:42.839376660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:42.840328 containerd[1467]: time="2025-01-29T11:57:42.840250926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:57:42.841533 containerd[1467]: time="2025-01-29T11:57:42.841488819Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:42.843523 containerd[1467]: time="2025-01-29T11:57:42.843496964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:42.844148 containerd[1467]: time="2025-01-29T11:57:42.844111626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.59612561s" Jan 29 11:57:42.844226 containerd[1467]: time="2025-01-29T11:57:42.844147818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:57:42.846217 containerd[1467]: time="2025-01-29T11:57:42.846183537Z" level=info msg="CreateContainer within sandbox \"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:57:42.862397 containerd[1467]: time="2025-01-29T11:57:42.862276264Z" level=info msg="CreateContainer within sandbox \"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"89c8d3f6a9f8c631b40b16a5d7872613f953afdf682ec1c0ec9e3d560fd4b8f8\"" Jan 29 11:57:42.863246 containerd[1467]: time="2025-01-29T11:57:42.863123827Z" level=info msg="StartContainer for \"89c8d3f6a9f8c631b40b16a5d7872613f953afdf682ec1c0ec9e3d560fd4b8f8\"" Jan 29 11:57:42.901388 systemd[1]: Started cri-containerd-89c8d3f6a9f8c631b40b16a5d7872613f953afdf682ec1c0ec9e3d560fd4b8f8.scope - libcontainer container 
89c8d3f6a9f8c631b40b16a5d7872613f953afdf682ec1c0ec9e3d560fd4b8f8. Jan 29 11:57:42.936515 containerd[1467]: time="2025-01-29T11:57:42.936458547Z" level=info msg="StartContainer for \"89c8d3f6a9f8c631b40b16a5d7872613f953afdf682ec1c0ec9e3d560fd4b8f8\" returns successfully" Jan 29 11:57:42.977282 kubelet[2592]: I0129 11:57:42.976669 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jtgqv" podStartSLOduration=28.570515245 podStartE2EDuration="36.976649092s" podCreationTimestamp="2025-01-29 11:57:06 +0000 UTC" firstStartedPulling="2025-01-29 11:57:34.438693813 +0000 UTC m=+51.470752686" lastFinishedPulling="2025-01-29 11:57:42.84482766 +0000 UTC m=+59.876886533" observedRunningTime="2025-01-29 11:57:42.975905934 +0000 UTC m=+60.007964807" watchObservedRunningTime="2025-01-29 11:57:42.976649092 +0000 UTC m=+60.008707965" Jan 29 11:57:43.041454 containerd[1467]: time="2025-01-29T11:57:43.041410501Z" level=info msg="StopPodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\"" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.081 [WARNING][5290] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62", Pod:"calico-apiserver-67b8685944-q45lw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b41d6d01b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.081 [INFO][5290] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.081 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" iface="eth0" netns="" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.081 [INFO][5290] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.081 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.103 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.103 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.103 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.108 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.108 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.109 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.116017 containerd[1467]: 2025-01-29 11:57:43.112 [INFO][5290] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.116017 containerd[1467]: time="2025-01-29T11:57:43.115560462Z" level=info msg="TearDown network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" successfully" Jan 29 11:57:43.116017 containerd[1467]: time="2025-01-29T11:57:43.115580511Z" level=info msg="StopPodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" returns successfully" Jan 29 11:57:43.122972 containerd[1467]: time="2025-01-29T11:57:43.122936975Z" level=info msg="RemovePodSandbox for \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\"" Jan 29 11:57:43.125195 containerd[1467]: time="2025-01-29T11:57:43.125148870Z" level=info msg="Forcibly stopping sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\"" Jan 29 11:57:43.138331 kubelet[2592]: I0129 11:57:43.138282 2592 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:57:43.138331 kubelet[2592]: I0129 11:57:43.138321 2592 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.253 [WARNING][5322] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fcf1af3-18c6-4f40-a2a4-51e333c1c84a", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5af2d0193ab9817e69d2ca11cc31a45b9c1df08d3f8bd10b60d372dff049d62", Pod:"calico-apiserver-67b8685944-q45lw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b41d6d01b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.254 [INFO][5322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.254 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" iface="eth0" netns="" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.254 [INFO][5322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.254 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.276 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.276 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.276 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.316 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.316 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" HandleID="k8s-pod-network.bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Workload="localhost-k8s-calico--apiserver--67b8685944--q45lw-eth0" Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.319 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.325297 containerd[1467]: 2025-01-29 11:57:43.322 [INFO][5322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0" Jan 29 11:57:43.325910 containerd[1467]: time="2025-01-29T11:57:43.325367298Z" level=info msg="TearDown network for sandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" successfully" Jan 29 11:57:43.364741 containerd[1467]: time="2025-01-29T11:57:43.364683541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:57:43.364869 containerd[1467]: time="2025-01-29T11:57:43.364774480Z" level=info msg="RemovePodSandbox \"bf781288cf0148553427fa13ccb984756da247b9af41dec3ef4dcb785897dff0\" returns successfully" Jan 29 11:57:43.365266 containerd[1467]: time="2025-01-29T11:57:43.365236692Z" level=info msg="StopPodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\"" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.399 [WARNING][5357] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898", Pod:"coredns-7db6d8ff4d-5ltr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f283209828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.399 [INFO][5357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.399 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" iface="eth0" netns="" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.399 [INFO][5357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.399 [INFO][5357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.421 [INFO][5364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.421 [INFO][5364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.421 [INFO][5364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.426 [WARNING][5364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.426 [INFO][5364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.428 [INFO][5364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.433237 containerd[1467]: 2025-01-29 11:57:43.430 [INFO][5357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.433237 containerd[1467]: time="2025-01-29T11:57:43.433199564Z" level=info msg="TearDown network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" successfully" Jan 29 11:57:43.433237 containerd[1467]: time="2025-01-29T11:57:43.433231016Z" level=info msg="StopPodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" returns successfully" Jan 29 11:57:43.434119 containerd[1467]: time="2025-01-29T11:57:43.433871620Z" level=info msg="RemovePodSandbox for \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\"" Jan 29 11:57:43.434119 containerd[1467]: time="2025-01-29T11:57:43.433896169Z" level=info msg="Forcibly stopping sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\"" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.472 [WARNING][5387] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47b1acd2-0c77-4c5d-aa4f-cd4e87a15eb7", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b4aec00acd8a919c778025fc70595491ae54fd1b11f1b0f3b4c3711e0e6898", Pod:"coredns-7db6d8ff4d-5ltr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f283209828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.472 [INFO][5387] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.472 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" iface="eth0" netns="" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.472 [INFO][5387] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.472 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.495 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.495 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.495 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.500 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.500 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" HandleID="k8s-pod-network.69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Workload="localhost-k8s-coredns--7db6d8ff4d--5ltr8-eth0" Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.501 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.506212 containerd[1467]: 2025-01-29 11:57:43.503 [INFO][5387] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2" Jan 29 11:57:43.506774 containerd[1467]: time="2025-01-29T11:57:43.506730955Z" level=info msg="TearDown network for sandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" successfully" Jan 29 11:57:43.529131 containerd[1467]: time="2025-01-29T11:57:43.529098177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:57:43.529220 containerd[1467]: time="2025-01-29T11:57:43.529147704Z" level=info msg="RemovePodSandbox \"69c413ffc3f0f7bed5a7122b0e86bd4520a991fe9152472e342fe9cc4fa20db2\" returns successfully" Jan 29 11:57:43.529607 containerd[1467]: time="2025-01-29T11:57:43.529585698Z" level=info msg="StopPodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\"" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.561 [WARNING][5417] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e64f57b-151d-456c-8ce1-abd59806b192", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f", Pod:"coredns-7db6d8ff4d-q8nvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285289f0374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.562 [INFO][5417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.562 [INFO][5417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" iface="eth0" netns="" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.562 [INFO][5417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.562 [INFO][5417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.581 [INFO][5424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.581 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.581 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.586 [WARNING][5424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.586 [INFO][5424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.587 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.591653 containerd[1467]: 2025-01-29 11:57:43.589 [INFO][5417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.592085 containerd[1467]: time="2025-01-29T11:57:43.591685724Z" level=info msg="TearDown network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" successfully" Jan 29 11:57:43.592085 containerd[1467]: time="2025-01-29T11:57:43.591710081Z" level=info msg="StopPodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" returns successfully" Jan 29 11:57:43.592245 containerd[1467]: time="2025-01-29T11:57:43.592220058Z" level=info msg="RemovePodSandbox for \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\"" Jan 29 11:57:43.592287 containerd[1467]: time="2025-01-29T11:57:43.592254596Z" level=info msg="Forcibly stopping sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\"" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.623 [WARNING][5446] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e64f57b-151d-456c-8ce1-abd59806b192", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"403e3ed9b522330fc742c8c9812dd4e53b13ffdb9bd90fd46e08d417e308cf9f", Pod:"coredns-7db6d8ff4d-q8nvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285289f0374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.624 [INFO][5446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.624 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" iface="eth0" netns="" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.624 [INFO][5446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.624 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.643 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.644 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.644 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.648 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.648 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" HandleID="k8s-pod-network.75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Workload="localhost-k8s-coredns--7db6d8ff4d--q8nvd-eth0" Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.649 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.653983 containerd[1467]: 2025-01-29 11:57:43.651 [INFO][5446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a" Jan 29 11:57:43.654545 containerd[1467]: time="2025-01-29T11:57:43.654007276Z" level=info msg="TearDown network for sandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" successfully" Jan 29 11:57:43.658402 containerd[1467]: time="2025-01-29T11:57:43.658340376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:57:43.658481 containerd[1467]: time="2025-01-29T11:57:43.658412879Z" level=info msg="RemovePodSandbox \"75abe51d1b1e9f1ece3317ab67db587514ad8712fc161f6152a6ad4e7ab83d1a\" returns successfully" Jan 29 11:57:43.658855 containerd[1467]: time="2025-01-29T11:57:43.658828970Z" level=info msg="StopPodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\"" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.690 [WARNING][5477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2c92084-f55b-4220-b802-d9b21f1f159e", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba", Pod:"calico-apiserver-67b8685944-5dgzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb818898bda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.691 [INFO][5477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.691 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" iface="eth0" netns="" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.691 [INFO][5477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.691 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.713 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.713 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.713 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.718 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.718 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.719 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.724732 containerd[1467]: 2025-01-29 11:57:43.722 [INFO][5477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.724732 containerd[1467]: time="2025-01-29T11:57:43.724673383Z" level=info msg="TearDown network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" successfully" Jan 29 11:57:43.724732 containerd[1467]: time="2025-01-29T11:57:43.724699354Z" level=info msg="StopPodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" returns successfully" Jan 29 11:57:43.725405 containerd[1467]: time="2025-01-29T11:57:43.725105064Z" level=info msg="RemovePodSandbox for \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\"" Jan 29 11:57:43.725405 containerd[1467]: time="2025-01-29T11:57:43.725139673Z" level=info msg="Forcibly stopping sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\"" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.758 [WARNING][5507] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0", GenerateName:"calico-apiserver-67b8685944-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2c92084-f55b-4220-b802-d9b21f1f159e", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b8685944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10fcc45e93c1e8b00eb8667032ea69cb9d5ccf29a3ca286abcc9cfe2aa3a7eba", Pod:"calico-apiserver-67b8685944-5dgzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb818898bda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.758 [INFO][5507] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.758 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" iface="eth0" netns="" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.758 [INFO][5507] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.758 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.779 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.779 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.779 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.784 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.784 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" HandleID="k8s-pod-network.79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Workload="localhost-k8s-calico--apiserver--67b8685944--5dgzn-eth0" Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.785 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.790321 containerd[1467]: 2025-01-29 11:57:43.788 [INFO][5507] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da" Jan 29 11:57:43.790765 containerd[1467]: time="2025-01-29T11:57:43.790368252Z" level=info msg="TearDown network for sandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" successfully" Jan 29 11:57:43.794442 containerd[1467]: time="2025-01-29T11:57:43.794418643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:57:43.794500 containerd[1467]: time="2025-01-29T11:57:43.794472029Z" level=info msg="RemovePodSandbox \"79b8723e7c169a838887d8d2c9ca06a2d2e2ed8cecfe331c0a59b477215d50da\" returns successfully" Jan 29 11:57:43.794915 containerd[1467]: time="2025-01-29T11:57:43.794892698Z" level=info msg="StopPodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\"" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.829 [WARNING][5536] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0", GenerateName:"calico-kube-controllers-65f599f856-", Namespace:"calico-system", SelfLink:"", UID:"34001644-e4ba-464a-bc44-9457515a4f0a", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f599f856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271", Pod:"calico-kube-controllers-65f599f856-ftjvl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7fefa4fea3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.830 [INFO][5536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.830 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" iface="eth0" netns="" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.830 [INFO][5536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.830 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.849 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.849 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.849 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.854 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.854 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.855 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.860253 containerd[1467]: 2025-01-29 11:57:43.858 [INFO][5536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.861034 containerd[1467]: time="2025-01-29T11:57:43.860292575Z" level=info msg="TearDown network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" successfully" Jan 29 11:57:43.861034 containerd[1467]: time="2025-01-29T11:57:43.860315971Z" level=info msg="StopPodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" returns successfully" Jan 29 11:57:43.861034 containerd[1467]: time="2025-01-29T11:57:43.860788152Z" level=info msg="RemovePodSandbox for \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\"" Jan 29 11:57:43.861034 containerd[1467]: time="2025-01-29T11:57:43.860823512Z" level=info msg="Forcibly stopping sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\"" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.891 [WARNING][5567] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0", GenerateName:"calico-kube-controllers-65f599f856-", Namespace:"calico-system", SelfLink:"", UID:"34001644-e4ba-464a-bc44-9457515a4f0a", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f599f856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d45f8be01e9594b40e2dc66c3c20a3994bca12a16e19251008aadaaa74ac8271", Pod:"calico-kube-controllers-65f599f856-ftjvl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7fefa4fea3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.892 [INFO][5567] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.892 [INFO][5567] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" iface="eth0" netns="" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.892 [INFO][5567] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.892 [INFO][5567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.915 [INFO][5574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.915 [INFO][5574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.915 [INFO][5574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.921 [WARNING][5574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.921 [INFO][5574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" HandleID="k8s-pod-network.d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Workload="localhost-k8s-calico--kube--controllers--65f599f856--ftjvl-eth0" Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.922 [INFO][5574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:43.927417 containerd[1467]: 2025-01-29 11:57:43.925 [INFO][5567] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842" Jan 29 11:57:43.927915 containerd[1467]: time="2025-01-29T11:57:43.927469888Z" level=info msg="TearDown network for sandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" successfully" Jan 29 11:57:43.931722 containerd[1467]: time="2025-01-29T11:57:43.931662600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:57:43.931722 containerd[1467]: time="2025-01-29T11:57:43.931713560Z" level=info msg="RemovePodSandbox \"d53939a35e75eb7b2ff0f3c75e663b15653b486cffb31795fe06f5657fa3d842\" returns successfully" Jan 29 11:57:43.932253 containerd[1467]: time="2025-01-29T11:57:43.932215992Z" level=info msg="StopPodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\"" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.968 [WARNING][5596] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jtgqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f", Pod:"csi-node-driver-jtgqv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec8f72200dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.968 [INFO][5596] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.968 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" iface="eth0" netns="" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.968 [INFO][5596] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.968 [INFO][5596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.990 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.990 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.990 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.996 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.996 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.997 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:44.003013 containerd[1467]: 2025-01-29 11:57:43.999 [INFO][5596] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.003656 containerd[1467]: time="2025-01-29T11:57:44.003056611Z" level=info msg="TearDown network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" successfully" Jan 29 11:57:44.003656 containerd[1467]: time="2025-01-29T11:57:44.003087823Z" level=info msg="StopPodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" returns successfully" Jan 29 11:57:44.003656 containerd[1467]: time="2025-01-29T11:57:44.003621646Z" level=info msg="RemovePodSandbox for \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\"" Jan 29 11:57:44.003656 containerd[1467]: time="2025-01-29T11:57:44.003645803Z" level=info msg="Forcibly stopping sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\"" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.038 [WARNING][5625] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jtgqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ff3da2e-f5a9-4f2e-9b75-df1775a2ff51", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9399d33c5f80940411f56cdb2bdbc943b1f4682465dbe97cd9fba948f628901f", Pod:"csi-node-driver-jtgqv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec8f72200dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.039 [INFO][5625] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.039 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" iface="eth0" netns="" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.039 [INFO][5625] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.039 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.063 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.064 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.064 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.069 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.069 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" HandleID="k8s-pod-network.dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Workload="localhost-k8s-csi--node--driver--jtgqv-eth0" Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.070 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:57:44.076415 containerd[1467]: 2025-01-29 11:57:44.073 [INFO][5625] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7" Jan 29 11:57:44.076967 containerd[1467]: time="2025-01-29T11:57:44.076438913Z" level=info msg="TearDown network for sandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" successfully" Jan 29 11:57:44.080376 containerd[1467]: time="2025-01-29T11:57:44.080349726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:57:44.080764 containerd[1467]: time="2025-01-29T11:57:44.080408342Z" level=info msg="RemovePodSandbox \"dcad76f34db3939641ccef389029571b470713495f35653b3f81acec0f3fb1f7\" returns successfully" Jan 29 11:57:44.999838 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:52068.service - OpenSSH per-connection server daemon (10.0.0.1:52068). Jan 29 11:57:45.045410 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 52068 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:45.047402 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:45.051734 systemd-logind[1452]: New session 16 of user core. Jan 29 11:57:45.059349 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:57:45.188361 sshd[5640]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:45.192447 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:52068.service: Deactivated successfully. Jan 29 11:57:45.194656 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:57:45.195379 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:57:45.196492 systemd-logind[1452]: Removed session 16. Jan 29 11:57:50.200353 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:52074.service - OpenSSH per-connection server daemon (10.0.0.1:52074). Jan 29 11:57:50.231903 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 52074 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:50.233792 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:50.237734 systemd-logind[1452]: New session 17 of user core. Jan 29 11:57:50.247410 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:57:50.363030 sshd[5677]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:50.368541 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:52074.service: Deactivated successfully. Jan 29 11:57:50.371229 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 29 11:57:50.372121 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:57:50.373457 systemd-logind[1452]: Removed session 17. Jan 29 11:57:53.062570 kubelet[2592]: E0129 11:57:53.062472 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:55.376095 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:56672.service - OpenSSH per-connection server daemon (10.0.0.1:56672). Jan 29 11:57:55.409961 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:55.411836 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:55.416014 systemd-logind[1452]: New session 18 of user core. Jan 29 11:57:55.429510 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:57:55.545653 sshd[5697]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:55.557042 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:56672.service: Deactivated successfully. Jan 29 11:57:55.559600 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:57:55.561688 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:57:55.568665 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:56678.service - OpenSSH per-connection server daemon (10.0.0.1:56678). Jan 29 11:57:55.569863 systemd-logind[1452]: Removed session 18. Jan 29 11:57:55.595830 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 56678 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:55.597548 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:55.601723 systemd-logind[1452]: New session 19 of user core. Jan 29 11:57:55.612409 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:57:55.874658 sshd[5711]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:55.881992 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:56678.service: Deactivated successfully. Jan 29 11:57:55.884112 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:57:55.885755 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:57:55.887322 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:56686.service - OpenSSH per-connection server daemon (10.0.0.1:56686). Jan 29 11:57:55.888265 systemd-logind[1452]: Removed session 19. Jan 29 11:57:55.922057 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:55.923677 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:55.927997 systemd-logind[1452]: New session 20 of user core. Jan 29 11:57:55.937333 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:57:57.539662 sshd[5723]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:57.551707 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:56686.service: Deactivated successfully. Jan 29 11:57:57.554110 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:57:57.555216 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:57:57.568067 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698). 
Jan 29 11:57:57.569834 systemd-logind[1452]: Removed session 20. Jan 29 11:57:57.600701 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:57.602727 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:57.608131 systemd-logind[1452]: New session 21 of user core. Jan 29 11:57:57.620521 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:57:57.885176 sshd[5747]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:57.897998 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:56698.service: Deactivated successfully. Jan 29 11:57:57.900430 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:57:57.903119 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:57:57.912554 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714). Jan 29 11:57:57.913802 systemd-logind[1452]: Removed session 21. Jan 29 11:57:57.943426 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:57:57.945696 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:57.951546 systemd-logind[1452]: New session 22 of user core. Jan 29 11:57:57.972955 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:57:58.171285 sshd[5762]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:58.175295 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:56714.service: Deactivated successfully. Jan 29 11:57:58.177586 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:57:58.178228 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:57:58.179236 systemd-logind[1452]: Removed session 22. Jan 29 11:57:59.906103 kubelet[2592]: I0129 11:57:59.905973 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:58:03.186918 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:38914.service - OpenSSH per-connection server daemon (10.0.0.1:38914). Jan 29 11:58:03.225433 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 38914 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:58:03.227260 sshd[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:03.232013 systemd-logind[1452]: New session 23 of user core. Jan 29 11:58:03.240375 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:58:03.376226 sshd[5780]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:03.380785 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:38914.service: Deactivated successfully. Jan 29 11:58:03.383051 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:58:03.383709 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:58:03.384725 systemd-logind[1452]: Removed session 23. Jan 29 11:58:08.396493 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:38926.service - OpenSSH per-connection server daemon (10.0.0.1:38926). Jan 29 11:58:08.410416 systemd[1]: run-containerd-runc-k8s.io-06fac806982fab5dc17b80997f75f08cc4ca7947de20407e89aca0a7aa3eed47-runc.qFla1p.mount: Deactivated successfully. 
Jan 29 11:58:08.438813 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 38926 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:58:08.440839 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:08.445091 systemd-logind[1452]: New session 24 of user core. Jan 29 11:58:08.454604 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:58:08.489057 kubelet[2592]: E0129 11:58:08.489019 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:08.575713 sshd[5802]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:08.580794 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:38926.service: Deactivated successfully. Jan 29 11:58:08.583421 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:58:08.584056 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:58:08.585259 systemd-logind[1452]: Removed session 24. Jan 29 11:58:13.598489 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884). Jan 29 11:58:13.640135 sshd[5840]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:58:13.642301 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:13.648288 systemd-logind[1452]: New session 25 of user core. Jan 29 11:58:13.656321 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:58:13.778997 sshd[5840]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:13.784411 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:58:13.784970 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:34884.service: Deactivated successfully. Jan 29 11:58:13.788363 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:58:13.790109 systemd-logind[1452]: Removed session 25. Jan 29 11:58:16.062323 kubelet[2592]: E0129 11:58:16.062278 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:18.793263 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:34900.service - OpenSSH per-connection server daemon (10.0.0.1:34900). Jan 29 11:58:18.826825 sshd[5879]: Accepted publickey for core from 10.0.0.1 port 34900 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:58:18.829081 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:18.837334 systemd-logind[1452]: New session 26 of user core. Jan 29 11:58:18.850349 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:58:18.979927 sshd[5879]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:18.985905 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:34900.service: Deactivated successfully. Jan 29 11:58:18.989020 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:58:18.990328 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:58:18.991602 systemd-logind[1452]: Removed session 26. Jan 29 11:58:19.364138 kubelet[2592]: I0129 11:58:19.364050 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
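Editor's note on the Calico teardown entries above (the repeated "Asked to release address but it doesn't exist. Ignoring" warnings during StopPodSandbox/RemovePodSandbox): a CNI DEL is expected to succeed even when the allocation is already gone, so the IPAM plugin logs the missing handle and carries on. The Go sketch below illustrates that idempotent-release pattern only; the Store interface, error value, and function names are hypothetical and are not Calico's actual code.

```go
// Hypothetical sketch of the idempotent-release pattern visible in the
// ipam_plugin.go log lines above: a missing allocation is logged as a
// warning and treated as success so repeated CNI DELs do not fail.
package main

import (
	"errors"
	"fmt"
	"log"
)

// ErrNotFound stands in for the datastore's "allocation does not exist" error.
var ErrNotFound = errors.New("allocation not found")

// Store is a hypothetical allocation store keyed by handle ID.
type Store interface {
	ReleaseByHandle(handleID string) error
}

// releaseAddress releases the allocation for handleID, treating a missing
// allocation as success so teardown stays idempotent.
func releaseAddress(s Store, handleID string) error {
	if err := s.ReleaseByHandle(handleID); err != nil {
		if errors.Is(err, ErrNotFound) {
			log.Printf("WARNING: asked to release %s but it doesn't exist; ignoring", handleID)
			return nil // already released: report success
		}
		return fmt.Errorf("releasing %s: %w", handleID, err)
	}
	return nil
}

// memStore is a toy in-memory store used only to exercise releaseAddress.
type memStore map[string]struct{}

func (m memStore) ReleaseByHandle(id string) error {
	if _, ok := m[id]; !ok {
		return ErrNotFound
	}
	delete(m, id)
	return nil
}

func main() {
	s := memStore{"k8s-pod-network.75abe51d": {}}
	fmt.Println(releaseAddress(s, "k8s-pod-network.75abe51d")) // released
	fmt.Println(releaseAddress(s, "k8s-pod-network.75abe51d")) // already gone, ignored
}
```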
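Editor's note on the recurring kubelet "Nameserver limits exceeded" warnings: the glibc resolver only honours the first three "nameserver" entries, so kubelet trims the list and reports the applied line (here "1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that trimming follows; the parsing and function names are simplified assumptions, not kubelet's actual implementation.

```go
// Minimal sketch of the three-nameserver limit behind the kubelet
// "Nameserver limits exceeded" warnings above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit enforced when building pod DNS config

// applyNameserverLimit returns at most maxNameservers entries from a
// resolv.conf-style string and reports whether any were dropped.
func applyNameserverLimit(resolvConf string) (kept []string, dropped bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	var all []string
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) > maxNameservers {
		return all[:maxNameservers], true
	}
	return all, false
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyNameserverLimit(conf)
	if dropped {
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}
```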