Jan 17 12:06:14.937172 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:06:14.937212 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:06:14.937236 kernel: BIOS-provided physical RAM map: Jan 17 12:06:14.937251 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 12:06:14.937257 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 12:06:14.937263 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 12:06:14.937271 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 12:06:14.937278 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 12:06:14.937284 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 12:06:14.937291 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 12:06:14.937300 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 12:06:14.937306 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 12:06:14.937316 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 12:06:14.937323 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 12:06:14.937333 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 12:06:14.937340 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 12:06:14.937350 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 12:06:14.937357 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 12:06:14.937364 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 12:06:14.937371 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 12:06:14.937378 kernel: NX (Execute Disable) protection: active Jan 17 12:06:14.937385 kernel: APIC: Static calls initialized Jan 17 12:06:14.937392 kernel: efi: EFI v2.7 by EDK II Jan 17 12:06:14.937399 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 17 12:06:14.937406 kernel: SMBIOS 2.8 present. 
Jan 17 12:06:14.937413 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 12:06:14.937420 kernel: Hypervisor detected: KVM Jan 17 12:06:14.937429 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:06:14.937436 kernel: kvm-clock: using sched offset of 6222564326 cycles Jan 17 12:06:14.937444 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:06:14.937451 kernel: tsc: Detected 2794.748 MHz processor Jan 17 12:06:14.937458 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:06:14.937466 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:06:14.937473 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 12:06:14.937481 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 12:06:14.937488 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:06:14.937497 kernel: Using GB pages for direct mapping Jan 17 12:06:14.937505 kernel: Secure boot disabled Jan 17 12:06:14.937522 kernel: ACPI: Early table checksum verification disabled Jan 17 12:06:14.937530 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 12:06:14.937551 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 12:06:14.937559 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:06:14.937567 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:06:14.937576 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 12:06:14.937584 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:06:14.937594 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:06:14.937602 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:06:14.937609 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:06:14.937616 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 12:06:14.937624 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 12:06:14.937634 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 17 12:06:14.937642 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 12:06:14.937649 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 12:06:14.937663 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 12:06:14.937671 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 12:06:14.937678 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 12:06:14.937686 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 12:06:14.937693 kernel: No NUMA configuration found Jan 17 12:06:14.937703 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 12:06:14.937713 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 12:06:14.937720 kernel: Zone ranges: Jan 17 12:06:14.937728 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:06:14.937735 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 12:06:14.937742 kernel: Normal empty Jan 17 12:06:14.937772 kernel: Movable zone start for each node Jan 17 12:06:14.937780 kernel: Early memory node ranges Jan 17 12:06:14.937788 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 12:06:14.937818 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 12:06:14.937826 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 12:06:14.937837 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 12:06:14.937844 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 12:06:14.937851 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 12:06:14.937862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 12:06:14.937869 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:06:14.937877 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 12:06:14.937884 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 12:06:14.937892 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:06:14.937899 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 12:06:14.937909 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 12:06:14.937916 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 12:06:14.937924 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:06:14.937931 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:06:14.937938 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:06:14.937946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:06:14.937953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:06:14.937961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:06:14.937968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:06:14.937978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:06:14.937985 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:06:14.937992 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:06:14.938000 kernel: TSC deadline timer available Jan 17 12:06:14.938007 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 12:06:14.938015 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:06:14.938022 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 12:06:14.938029 kernel: kvm-guest: setup PV sched yield Jan 17 12:06:14.938037 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:06:14.938046 kernel: Booting paravirtualized kernel on KVM Jan 17 12:06:14.938054 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:06:14.938061 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 12:06:14.938069 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 17 12:06:14.938076 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 17 12:06:14.938084 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 12:06:14.938091 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:06:14.938098 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:06:14.938107 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 
12:06:14.938120 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:06:14.938127 kernel: random: crng init done Jan 17 12:06:14.938134 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:06:14.938142 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:06:14.938149 kernel: Fallback order for Node 0: 0 Jan 17 12:06:14.938157 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 12:06:14.938164 kernel: Policy zone: DMA32 Jan 17 12:06:14.938171 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:06:14.938181 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 171124K reserved, 0K cma-reserved) Jan 17 12:06:14.938189 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:06:14.938196 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:06:14.938204 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:06:14.938211 kernel: Dynamic Preempt: voluntary Jan 17 12:06:14.938227 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:06:14.938237 kernel: rcu: RCU event tracing is enabled. Jan 17 12:06:14.938245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:06:14.938253 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:06:14.938261 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:06:14.938268 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:06:14.938276 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:06:14.938286 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:06:14.938294 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 12:06:14.938304 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:06:14.938312 kernel: Console: colour dummy device 80x25 Jan 17 12:06:14.938319 kernel: printk: console [ttyS0] enabled Jan 17 12:06:14.938329 kernel: ACPI: Core revision 20230628 Jan 17 12:06:14.938337 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:06:14.938345 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:06:14.938353 kernel: x2apic enabled Jan 17 12:06:14.938360 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:06:14.938368 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 12:06:14.938376 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 12:06:14.938383 kernel: kvm-guest: setup PV IPIs Jan 17 12:06:14.938391 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:06:14.938401 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:06:14.938409 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 17 12:06:14.938417 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:06:14.938424 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 12:06:14.938432 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 12:06:14.938440 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:06:14.938448 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:06:14.938455 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:06:14.938463 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:06:14.938473 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 17 12:06:14.938481 kernel: RETBleed: Mitigation: untrained return thunk Jan 17 12:06:14.938489 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:06:14.938497 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:06:14.938507 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 12:06:14.938515 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 12:06:14.938523 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 12:06:14.938531 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:06:14.938541 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:06:14.938549 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:06:14.938556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:06:14.938564 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 12:06:14.938572 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:06:14.938580 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:06:14.938588 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:06:14.938595 kernel: landlock: Up and running. Jan 17 12:06:14.938603 kernel: SELinux: Initializing. Jan 17 12:06:14.938613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:06:14.938621 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:06:14.938629 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 17 12:06:14.938637 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:06:14.938645 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:06:14.938659 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:06:14.938667 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 17 12:06:14.938675 kernel: ... version: 0 Jan 17 12:06:14.938694 kernel: ... bit width: 48 Jan 17 12:06:14.938729 kernel: ... generic registers: 6 Jan 17 12:06:14.938738 kernel: ... value mask: 0000ffffffffffff Jan 17 12:06:14.938761 kernel: ... max period: 00007fffffffffff Jan 17 12:06:14.938770 kernel: ... fixed-purpose events: 0 Jan 17 12:06:14.938778 kernel: ... 
event mask: 000000000000003f Jan 17 12:06:14.938785 kernel: signal: max sigframe size: 1776 Jan 17 12:06:14.938832 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:06:14.938841 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:06:14.938848 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:06:14.938865 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:06:14.938873 kernel: .... node #0, CPUs: #1 #2 #3 Jan 17 12:06:14.938880 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:06:14.938888 kernel: smpboot: Max logical packages: 1 Jan 17 12:06:14.938896 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 17 12:06:14.938904 kernel: devtmpfs: initialized Jan 17 12:06:14.938911 kernel: x86/mm: Memory block size: 128MB Jan 17 12:06:14.938919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 12:06:14.938927 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 12:06:14.938938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 12:06:14.938946 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 12:06:14.938954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 12:06:14.938962 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:06:14.938970 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:06:14.938978 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:06:14.938986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:06:14.938993 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:06:14.939001 kernel: audit: type=2000 audit(1737115574.033:1): state=initialized audit_enabled=0 res=1 Jan 17 12:06:14.939011 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:06:14.939019 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:06:14.939027 kernel: cpuidle: using governor menu Jan 17 12:06:14.939034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:06:14.939042 kernel: dca service started, version 1.12.1 Jan 17 12:06:14.939050 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:06:14.939058 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 12:06:14.939066 kernel: PCI: Using configuration type 1 for base access Jan 17 12:06:14.939073 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:06:14.939084 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:06:14.939091 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:06:14.939099 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:06:14.939107 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:06:14.939115 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:06:14.939122 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:06:14.939130 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:06:14.939138 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:06:14.939146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:06:14.939156 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:06:14.939163 kernel: ACPI: Interpreter enabled Jan 17 12:06:14.939171 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:06:14.939179 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:06:14.939187 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:06:14.939194 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:06:14.939202 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:06:14.939210 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:06:14.939421 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:06:14.939560 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 12:06:14.939695 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 12:06:14.939706 kernel: PCI host bridge to bus 0000:00 Jan 17 12:06:14.939882 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:06:14.940060 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:06:14.940188 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:06:14.940330 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 12:06:14.940448 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:14.940563 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 12:06:14.940692 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:06:14.940863 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:06:14.941010 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 12:06:14.941143 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 12:06:14.941269 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 12:06:14.941393 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 12:06:14.941520 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 12:06:14.941645 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:06:14.941832 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:06:14.941963 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 12:06:14.942095 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 12:06:14.942220 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 12:06:14.942368 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:06:14.942496 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 
12:06:14.942621 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 17 12:06:14.942775 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 12:06:14.942937 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:06:14.943072 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 12:06:14.943199 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 12:06:14.943325 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 12:06:14.943453 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 12:06:14.943601 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:06:14.943738 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:06:14.943949 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:06:14.944084 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 12:06:14.944208 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 12:06:14.944353 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:06:14.944479 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 12:06:14.944490 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:06:14.944498 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:06:14.944506 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:06:14.944518 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:06:14.944526 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:06:14.944534 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:06:14.944542 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:06:14.944550 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:06:14.944558 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:06:14.944565 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:06:14.944573 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:06:14.944581 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:06:14.944591 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:06:14.944599 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:06:14.944607 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:06:14.944615 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:06:14.944622 kernel: iommu: Default domain type: Translated Jan 17 12:06:14.944630 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:06:14.944638 kernel: efivars: Registered efivars operations Jan 17 12:06:14.944646 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:06:14.944662 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:06:14.944672 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 12:06:14.944680 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 12:06:14.944688 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 12:06:14.944696 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 12:06:14.944840 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 12:06:14.944967 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:06:14.945103 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 
12:06:14.945114 kernel: vgaarb: loaded Jan 17 12:06:14.945122 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:06:14.945135 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:06:14.945143 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:06:14.945150 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:06:14.945158 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:06:14.945166 kernel: pnp: PnP ACPI init Jan 17 12:06:14.945315 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:06:14.945327 kernel: pnp: PnP ACPI: found 6 devices Jan 17 12:06:14.945335 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:06:14.945347 kernel: NET: Registered PF_INET protocol family Jan 17 12:06:14.945356 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:06:14.945364 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:06:14.945372 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:06:14.945380 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:06:14.945388 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:06:14.945396 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:06:14.945403 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:06:14.945411 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:06:14.945422 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:06:14.945430 kernel: NET: Registered PF_XDP protocol family Jan 17 12:06:14.945558 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 12:06:14.945693 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 12:06:14.945834 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:06:14.945953 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:06:14.946067 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:06:14.946183 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 12:06:14.946304 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:14.946420 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 12:06:14.946430 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:06:14.946438 kernel: Initialise system trusted keyrings Jan 17 12:06:14.946446 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:06:14.946454 kernel: Key type asymmetric registered Jan 17 12:06:14.946462 kernel: Asymmetric key parser 'x509' registered Jan 17 12:06:14.946469 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:06:14.946481 kernel: io scheduler mq-deadline registered Jan 17 12:06:14.946489 kernel: io scheduler kyber registered Jan 17 12:06:14.946497 kernel: io scheduler bfq registered Jan 17 12:06:14.946504 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:06:14.946513 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:06:14.946521 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:06:14.946528 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 12:06:14.946536 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 17 12:06:14.946544 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:06:14.946555 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:06:14.946563 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:06:14.946571 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:06:14.946733 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 12:06:14.946746 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:06:14.946940 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 12:06:14.947060 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:06:14 UTC (1737115574) Jan 17 12:06:14.947177 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 12:06:14.947192 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 12:06:14.947200 kernel: efifb: probing for efifb Jan 17 12:06:14.947208 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 12:06:14.947216 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 12:06:14.947223 kernel: efifb: scrolling: redraw Jan 17 12:06:14.947231 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 12:06:14.947239 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 12:06:14.947264 kernel: fb0: EFI VGA frame buffer device Jan 17 12:06:14.947275 kernel: pstore: Using crash dump compression: deflate Jan 17 12:06:14.947285 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:06:14.947293 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:06:14.947301 kernel: Segment Routing with IPv6 Jan 17 12:06:14.947310 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:06:14.947318 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:06:14.947326 kernel: Key type dns_resolver registered Jan 17 12:06:14.947334 kernel: IPI shorthand broadcast: enabled Jan 17 12:06:14.947342 kernel: sched_clock: Marking stable (1000002845, 117284798)->(1139299613, -22011970) Jan 17 12:06:14.947350 kernel: registered taskstats version 1 Jan 17 12:06:14.947361 kernel: Loading compiled-in X.509 certificates Jan 17 12:06:14.947370 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:06:14.947378 kernel: Key type .fscrypt registered Jan 17 12:06:14.947386 kernel: Key type fscrypt-provisioning registered Jan 17 12:06:14.947394 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:06:14.947402 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:06:14.947410 kernel: ima: No architecture policies found Jan 17 12:06:14.947419 kernel: clk: Disabling unused clocks Jan 17 12:06:14.947427 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:06:14.947437 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:06:14.947446 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:06:14.947454 kernel: Run /init as init process Jan 17 12:06:14.947462 kernel: with arguments: Jan 17 12:06:14.947470 kernel: /init Jan 17 12:06:14.947478 kernel: with environment: Jan 17 12:06:14.947486 kernel: HOME=/ Jan 17 12:06:14.947494 kernel: TERM=linux Jan 17 12:06:14.947502 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:06:14.947514 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:06:14.947524 systemd[1]: Detected virtualization kvm. Jan 17 12:06:14.947533 systemd[1]: Detected architecture x86-64. Jan 17 12:06:14.947541 systemd[1]: Running in initrd. Jan 17 12:06:14.947555 systemd[1]: No hostname configured, using default hostname. Jan 17 12:06:14.947563 systemd[1]: Hostname set to . Jan 17 12:06:14.947572 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:06:14.947581 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:06:14.947589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:14.947600 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:06:14.947610 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:06:14.947621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:06:14.947632 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:06:14.947641 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:06:14.947651 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:06:14.947668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:06:14.947677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:14.947686 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:14.947694 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:06:14.947706 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:06:14.947714 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:06:14.947723 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:06:14.947731 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:06:14.947740 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:06:14.947749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:06:14.947757 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 12:06:14.947766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:14.947778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:14.947786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:06:14.947819 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:06:14.947828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:06:14.947837 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:06:14.947846 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:06:14.947854 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:06:14.947863 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:06:14.947872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:06:14.947884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:14.947893 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:06:14.947902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:14.947910 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:06:14.947939 systemd-journald[192]: Collecting audit messages is disabled. Jan 17 12:06:14.947961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:06:14.947970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:14.947980 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:14.947991 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:14.948000 systemd-journald[192]: Journal started Jan 17 12:06:14.948018 systemd-journald[192]: Runtime Journal (/run/log/journal/5550f2455bd946209559d76b32497c7c) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:06:14.937570 systemd-modules-load[194]: Inserted module 'overlay' Jan 17 12:06:14.950069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:06:14.951812 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:06:14.956678 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:06:14.968918 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:06:14.968950 kernel: Bridge firewalling registered Jan 17 12:06:14.969026 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 17 12:06:14.970747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:14.978992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:06:14.979682 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:14.981651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:14.984293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:14.988101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:06:14.998673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:06:15.006967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:06:15.009967 dracut-cmdline[226]: dracut-dracut-053 Jan 17 12:06:15.011230 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:06:15.036553 systemd-resolved[233]: Positive Trust Anchors: Jan 17 12:06:15.036570 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:06:15.036601 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:06:15.039255 systemd-resolved[233]: Defaulting to hostname 'linux'. Jan 17 12:06:15.040527 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:06:15.046632 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:15.101854 kernel: SCSI subsystem initialized Jan 17 12:06:15.110853 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:06:15.121828 kernel: iscsi: registered transport (tcp) Jan 17 12:06:15.143842 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:06:15.143906 kernel: QLogic iSCSI HBA Driver Jan 17 12:06:15.200572 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:06:15.210149 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:06:15.239861 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:06:15.239913 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:06:15.241103 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:06:15.283838 kernel: raid6: avx2x4 gen() 29852 MB/s Jan 17 12:06:15.300840 kernel: raid6: avx2x2 gen() 30599 MB/s Jan 17 12:06:15.317934 kernel: raid6: avx2x1 gen() 25561 MB/s Jan 17 12:06:15.318028 kernel: raid6: using algorithm avx2x2 gen() 30599 MB/s Jan 17 12:06:15.335919 kernel: raid6: .... xor() 19792 MB/s, rmw enabled Jan 17 12:06:15.335975 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:06:15.358845 kernel: xor: automatically using best checksumming function avx Jan 17 12:06:15.517860 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:06:15.533438 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:15.544990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:15.556893 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 17 12:06:15.561672 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 12:06:15.570007 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:06:15.584382 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jan 17 12:06:15.620695 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:15.638948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:06:15.708287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:15.719993 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:06:15.739891 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:15.743382 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:06:15.746807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:15.749534 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:06:15.752819 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 12:06:15.772248 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:06:15.772436 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:06:15.772449 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:06:15.772461 kernel: GPT:9289727 != 19775487 Jan 17 12:06:15.772471 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:06:15.772481 kernel: GPT:9289727 != 19775487 Jan 17 12:06:15.772491 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:06:15.772511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:06:15.772522 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:06:15.772533 kernel: AES CTR mode by8 optimization enabled Jan 17 12:06:15.765258 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:06:15.785272 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:15.791814 kernel: libata version 3.00 loaded. 
Jan 17 12:06:15.804994 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:06:15.838204 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:06:15.838223 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:06:15.838385 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:06:15.838530 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (476) Jan 17 12:06:15.838542 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (457) Jan 17 12:06:15.838554 kernel: scsi host0: ahci Jan 17 12:06:15.838737 kernel: scsi host1: ahci Jan 17 12:06:15.838925 kernel: scsi host2: ahci Jan 17 12:06:15.839078 kernel: scsi host3: ahci Jan 17 12:06:15.839437 kernel: scsi host4: ahci Jan 17 12:06:15.839688 kernel: scsi host5: ahci Jan 17 12:06:15.839868 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 12:06:15.839880 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 12:06:15.839895 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 12:06:15.839905 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 12:06:15.839916 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 12:06:15.839926 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 12:06:15.804455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:06:15.804609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:15.806355 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:15.808525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:15.808675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:15.810094 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:15.818256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:15.838105 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:06:15.848579 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:06:15.853666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:06:15.854325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:15.860730 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:06:15.862047 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:06:15.879921 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:06:15.881930 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:15.888224 disk-uuid[559]: Primary Header is updated. Jan 17 12:06:15.888224 disk-uuid[559]: Secondary Entries is updated. Jan 17 12:06:15.888224 disk-uuid[559]: Secondary Header is updated. 
Jan 17 12:06:15.891819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:06:15.896834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:06:15.910728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:16.151223 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:06:16.151330 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:06:16.151345 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:06:16.151359 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:06:16.152835 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:06:16.153832 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:06:16.153922 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:06:16.154885 kernel: ata3.00: applying bridge limits Jan 17 12:06:16.154942 kernel: ata3.00: configured for UDMA/100 Jan 17 12:06:16.155822 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:06:16.207830 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:06:16.221895 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:06:16.221916 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:06:16.900874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:06:16.901142 disk-uuid[561]: The operation has completed successfully. Jan 17 12:06:16.957608 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:06:16.957852 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:06:16.991984 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:06:17.001351 sh[593]: Success Jan 17 12:06:17.036825 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:06:17.100416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:06:17.114481 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:06:17.117527 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:06:17.134415 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:06:17.134450 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:17.134461 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:06:17.135447 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:06:17.136216 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:06:17.141401 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:06:17.143911 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:06:17.154006 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:06:17.156331 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:06:17.166074 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:17.166111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:17.166126 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:06:17.170827 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:06:17.180051 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 17 12:06:17.182833 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:17.192115 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:06:17.203036 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:06:17.268965 ignition[685]: Ignition 2.19.0 Jan 17 12:06:17.268982 ignition[685]: Stage: fetch-offline Jan 17 12:06:17.269028 ignition[685]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:17.269044 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:06:17.269169 ignition[685]: parsed url from cmdline: "" Jan 17 12:06:17.269175 ignition[685]: no config URL provided Jan 17 12:06:17.269183 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:06:17.269198 ignition[685]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:06:17.269240 ignition[685]: op(1): [started] loading QEMU firmware config module Jan 17 12:06:17.269247 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:06:17.278274 ignition[685]: op(1): [finished] loading QEMU firmware config module Jan 17 12:06:17.278310 ignition[685]: QEMU firmware config was not found. Ignoring... Jan 17 12:06:17.305908 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:17.313975 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:17.325668 ignition[685]: parsing config with SHA512: 97a654cbee8af714123026c42cd939dd8e3e6c72b09c5adbcb233599822e48936a45d539fd7279670c0d8ac752dd604b1976933b873ae36ec0b2a504e081805b Jan 17 12:06:17.329970 unknown[685]: fetched base config from "system" Jan 17 12:06:17.330502 unknown[685]: fetched user config from "qemu" Jan 17 12:06:17.331031 ignition[685]: fetch-offline: fetch-offline passed Jan 17 12:06:17.331124 ignition[685]: Ignition finished successfully Jan 17 12:06:17.334258 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:17.339189 systemd-networkd[783]: lo: Link UP Jan 17 12:06:17.339200 systemd-networkd[783]: lo: Gained carrier Jan 17 12:06:17.341037 systemd-networkd[783]: Enumeration completed Jan 17 12:06:17.341161 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:17.341471 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:17.341475 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:06:17.343547 systemd[1]: Reached target network.target - Network. Jan 17 12:06:17.343743 systemd-networkd[783]: eth0: Link UP Jan 17 12:06:17.343747 systemd-networkd[783]: eth0: Gained carrier Jan 17 12:06:17.343754 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:17.345675 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:06:17.352962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:06:17.360867 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:06:17.368414 ignition[787]: Ignition 2.19.0 Jan 17 12:06:17.368428 ignition[787]: Stage: kargs Jan 17 12:06:17.368607 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:17.368618 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:06:17.369428 ignition[787]: kargs: kargs passed Jan 17 12:06:17.369487 ignition[787]: Ignition finished successfully Jan 17 12:06:17.373262 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:06:17.384960 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:06:17.402581 ignition[797]: Ignition 2.19.0 Jan 17 12:06:17.402604 ignition[797]: Stage: disks Jan 17 12:06:17.402844 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:17.402861 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:06:17.403887 ignition[797]: disks: disks passed Jan 17 12:06:17.406527 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:06:17.403950 ignition[797]: Ignition finished successfully Jan 17 12:06:17.408401 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:17.410346 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:06:17.411616 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:06:17.413370 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:17.415447 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:06:17.426927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:06:17.441080 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:06:17.448608 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:06:17.459978 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:06:17.711834 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:06:17.712951 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:06:17.714196 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:06:17.721875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:17.723355 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:06:17.726191 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:06:17.726240 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:06:17.734893 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Jan 17 12:06:17.726265 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:17.740440 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:17.740467 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:17.740495 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:06:17.731366 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 17 12:06:17.735968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:06:17.744808 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:06:17.746972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:17.779325 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:06:17.785026 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:06:17.790238 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:06:17.795358 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:06:17.902701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:17.911918 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:06:17.921836 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:06:17.925824 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:17.954451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:06:17.977397 ignition[930]: INFO : Ignition 2.19.0 Jan 17 12:06:17.977397 ignition[930]: INFO : Stage: mount Jan 17 12:06:17.979309 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:17.979309 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:06:17.979309 ignition[930]: INFO : mount: mount passed Jan 17 12:06:17.979309 ignition[930]: INFO : Ignition finished successfully Jan 17 12:06:17.997383 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:06:18.015877 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:06:18.133683 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:06:18.142035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:18.151816 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Jan 17 12:06:18.153862 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:18.153881 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:18.153892 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:06:18.156820 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:06:18.158486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:06:18.191215 ignition[957]: INFO : Ignition 2.19.0 Jan 17 12:06:18.191215 ignition[957]: INFO : Stage: files Jan 17 12:06:18.193228 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:18.193228 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:06:18.195739 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:06:18.197755 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:06:18.197755 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:06:18.201902 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:06:18.203401 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:06:18.203401 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:06:18.202774 unknown[957]: wrote ssh authorized keys file for user: core Jan 17 12:06:18.207322 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:06:18.207322 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:06:18.268820 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:06:18.489758 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:06:18.489758 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:06:18.493586 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:06:18.493586 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:18.497696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:18.497696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:18.497696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:18.497696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:06:18.505036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 17 12:06:18.878431 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:06:19.072309 systemd-networkd[783]: eth0: Gained IPv6LL Jan 17 12:06:20.075868 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:06:20.075868 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 12:06:20.080554 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:06:20.108914 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:06:20.152262 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:06:20.154149 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:06:20.154149 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:20.157044 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:20.158615 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:20.160479 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:20.162245 ignition[957]: INFO : files: files passed Jan 17 12:06:20.162245 ignition[957]: INFO : Ignition finished successfully Jan 17 12:06:20.165972 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:06:20.179019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:06:20.182125 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 17 12:06:20.186586 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:06:20.186746 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:06:20.218127 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:06:20.222548 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:20.222548 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:20.226304 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:20.227934 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:20.229318 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:06:20.241017 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:06:20.275434 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:06:20.275617 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:06:20.278645 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:06:20.281059 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:06:20.283661 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:06:20.284711 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:06:20.306940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:20.326183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:06:20.337501 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:20.339088 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:20.339407 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:06:20.339769 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:06:20.339915 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:20.340836 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:06:20.341152 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:06:20.341558 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:06:20.342132 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:20.342500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:20.342888 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:06:20.343245 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:06:20.343641 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:06:20.344210 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:06:20.344557 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:06:20.345062 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:06:20.345221 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:20.345981 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 12:06:20.346331 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:20.346644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:06:20.346760 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:20.347147 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:06:20.410099 ignition[1012]: INFO : Ignition 2.19.0 Jan 17 12:06:20.410099 ignition[1012]: INFO : Stage: umount Jan 17 12:06:20.410099 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:20.410099 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:06:20.410099 ignition[1012]: INFO : umount: umount passed Jan 17 12:06:20.410099 ignition[1012]: INFO : Ignition finished successfully Jan 17 12:06:20.347282 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:20.347995 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:06:20.348136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:20.348586 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:06:20.349147 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:06:20.350884 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:06:20.351294 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:06:20.351626 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:06:20.352123 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:06:20.352259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:06:20.352663 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:06:20.352781 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:06:20.353156 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:06:20.353299 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:20.353841 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:06:20.353973 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:06:20.387164 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:06:20.410219 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:06:20.411356 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:06:20.411544 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:20.413687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:06:20.413867 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:20.418323 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:06:20.418469 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:06:20.420369 systemd[1]: Stopped target network.target - Network. Jan 17 12:06:20.423427 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:06:20.423520 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:06:20.427277 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:06:20.427351 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:06:20.429540 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 17 12:06:20.429607 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:06:20.431888 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:06:20.431955 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:06:20.434695 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:06:20.436700 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:06:20.455857 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 17 12:06:20.456728 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:06:20.457595 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:06:20.457720 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:06:20.461832 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:06:20.462009 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:06:20.477452 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:06:20.477660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:06:20.482993 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:06:20.483058 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:20.489921 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:06:20.491156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:06:20.491237 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:20.497127 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:06:20.497205 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:20.499403 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:06:20.499472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:20.501883 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:06:20.501950 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:20.503546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:20.516051 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:06:20.516201 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:06:20.533756 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:06:20.533995 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:20.536385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:06:20.536439 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:20.538243 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:06:20.538285 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:06:20.540165 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:06:20.540219 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:20.542630 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:06:20.542688 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:06:20.544548 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 12:06:20.544602 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:20.562666 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:06:20.564601 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:06:20.564699 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:20.567090 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:06:20.567171 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:20.569404 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:06:20.569464 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:20.570187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:20.570259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:20.572863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:06:20.573018 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:06:20.761277 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:06:20.761447 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:06:20.763749 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:06:20.776655 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:06:20.776784 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:20.797201 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:06:20.806552 systemd[1]: Switching root. Jan 17 12:06:20.839943 systemd-journald[192]: Journal stopped Jan 17 12:06:22.093739 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 17 12:06:22.093824 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:06:22.093844 kernel: SELinux: policy capability open_perms=1 Jan 17 12:06:22.093855 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:06:22.093876 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:06:22.093893 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:06:22.093910 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:06:22.093922 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:06:22.093933 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:06:22.093945 kernel: audit: type=1403 audit(1737115581.260:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:06:22.093962 systemd[1]: Successfully loaded SELinux policy in 48.841ms. Jan 17 12:06:22.093977 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.993ms. Jan 17 12:06:22.093990 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:06:22.094005 systemd[1]: Detected virtualization kvm. Jan 17 12:06:22.094018 systemd[1]: Detected architecture x86-64. Jan 17 12:06:22.094030 systemd[1]: Detected first boot. Jan 17 12:06:22.094041 systemd[1]: Initializing machine ID from VM UUID. 
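[note] The "systemd 255 running in system mode (+PAM +AUDIT ... -SYSVINIT default-hierarchy=unified)" line above encodes compile-time features as +/- flags. A small sketch that splits that string (copied from the log) into enabled and disabled sets; the trailing key=value token is not a flag and is ignored:

    # Parse systemd's compile-time feature string from the journal line above.
    flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
             "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
             "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
             "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
             "default-hierarchy=unified")

    enabled  = sorted(t[1:] for t in flags.split() if t.startswith("+"))
    disabled = sorted(t[1:] for t in flags.split() if t.startswith("-"))
    print("enabled:", enabled)
    print("disabled:", disabled)   # APPARMOR, GNUTLS, ACL, FIDO2, ... on this build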
Jan 17 12:06:22.094055 zram_generator::config[1057]: No configuration found. Jan 17 12:06:22.094074 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:06:22.094086 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:06:22.094101 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:06:22.094119 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:06:22.094132 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:06:22.094145 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:06:22.094157 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:06:22.094169 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:06:22.094182 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:06:22.094195 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:06:22.094207 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:06:22.094219 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:06:22.094234 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:22.094248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:06:22.094264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:06:22.094277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:06:22.094289 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:06:22.094302 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:06:22.094314 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:06:22.094327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:22.094340 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:06:22.094358 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:06:22.094374 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:06:22.094391 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:06:22.094406 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:22.094418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:06:22.094430 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:06:22.094442 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:06:22.094466 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:06:22.094481 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:06:22.094494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:22.094506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:22.094522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:06:22.094534 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 17 12:06:22.094548 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:06:22.094560 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:06:22.094572 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:06:22.094588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:22.094600 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:06:22.094612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:06:22.094624 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:06:22.094644 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:06:22.094656 systemd[1]: Reached target machines.target - Containers. Jan 17 12:06:22.094668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:06:22.094680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:06:22.094692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:06:22.094709 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:06:22.094721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:06:22.094734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:06:22.094746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:06:22.094758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:06:22.094770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:06:22.094783 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:06:22.094838 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:06:22.094865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:06:22.094879 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:06:22.094891 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:06:22.094903 kernel: loop: module loaded Jan 17 12:06:22.094915 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:06:22.094927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:06:22.094940 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:06:22.094951 kernel: fuse: init (API version 7.39) Jan 17 12:06:22.094963 kernel: ACPI: bus type drm_connector registered Jan 17 12:06:22.094981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:06:22.094993 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:06:22.095007 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:06:22.095019 systemd[1]: Stopped verity-setup.service. Jan 17 12:06:22.095051 systemd-journald[1131]: Collecting audit messages is disabled. 
Jan 17 12:06:22.095076 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:22.095090 systemd-journald[1131]: Journal started Jan 17 12:06:22.095118 systemd-journald[1131]: Runtime Journal (/run/log/journal/5550f2455bd946209559d76b32497c7c) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:06:21.828396 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:06:21.845539 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:06:21.846147 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:06:22.096815 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:06:22.098703 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:06:22.100026 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:06:22.101351 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:06:22.102613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:06:22.103925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:06:22.105237 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:06:22.106694 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:06:22.108293 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:22.110026 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:06:22.110243 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:06:22.111930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:06:22.112140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:06:22.113913 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:06:22.114121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:06:22.115848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:06:22.116054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:06:22.117880 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:06:22.118083 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:06:22.119747 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:06:22.120012 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:06:22.121964 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:22.123545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:06:22.125153 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:06:22.144748 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:06:22.156931 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:06:22.159888 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:06:22.161129 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:06:22.161162 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 12:06:22.163497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:06:22.166264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:06:22.171414 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:06:22.172972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:22.177092 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:06:22.180328 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:06:22.182002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:06:22.186563 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:06:22.187927 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:06:22.199066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:06:22.201985 systemd-journald[1131]: Time spent on flushing to /var/log/journal/5550f2455bd946209559d76b32497c7c is 20.850ms for 991 entries. Jan 17 12:06:22.201985 systemd-journald[1131]: System Journal (/var/log/journal/5550f2455bd946209559d76b32497c7c) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:06:22.267507 systemd-journald[1131]: Received client request to flush runtime journal. Jan 17 12:06:22.267584 kernel: loop0: detected capacity change from 0 to 205544 Jan 17 12:06:22.267688 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:06:22.206163 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:06:22.223664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:06:22.228161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:22.230375 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:06:22.255977 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:06:22.258247 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:06:22.260503 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:06:22.273175 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:06:22.278075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:06:22.287859 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:06:22.292872 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 17 12:06:22.292897 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 17 12:06:22.296982 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:06:22.300219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:22.385219 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
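[note] journald reports 20.850 ms spent flushing 991 entries to the persistent journal above, which works out to roughly 21 microseconds per entry. A trivial check of that figure:

    # Per-entry flush cost implied by the journald line above (illustrative only).
    flush_ms, entries = 20.850, 991
    print(f"{flush_ms / entries * 1000:.1f} us per entry")   # ~21.0 us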
Jan 17 12:06:22.390197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:22.390839 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:06:22.400128 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:06:22.402853 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:06:22.403997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:06:22.439415 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:06:22.446830 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 12:06:22.448184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:06:22.484351 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 17 12:06:22.484383 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 17 12:06:22.491590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:22.492824 kernel: loop3: detected capacity change from 0 to 205544 Jan 17 12:06:22.508825 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:06:22.535849 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 12:06:22.547819 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:06:22.548622 (sd-merge)[1200]: Merged extensions into '/usr'. Jan 17 12:06:22.554778 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:06:22.554819 systemd[1]: Reloading... Jan 17 12:06:22.702842 zram_generator::config[1224]: No configuration found. Jan 17 12:06:22.955525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:23.042467 systemd[1]: Reloading finished in 486 ms. Jan 17 12:06:23.067331 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:06:23.106344 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:06:23.108635 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:06:23.128089 systemd[1]: Starting ensure-sysext.service... Jan 17 12:06:23.130760 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:06:23.143906 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:06:23.143928 systemd[1]: Reloading... Jan 17 12:06:23.205366 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:06:23.205987 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:06:23.207319 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:06:23.207772 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 17 12:06:23.207903 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 17 12:06:23.218147 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 17 12:06:23.218157 systemd-tmpfiles[1265]: Skipping /boot Jan 17 12:06:23.242821 zram_generator::config[1291]: No configuration found. Jan 17 12:06:23.262222 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:06:23.262244 systemd-tmpfiles[1265]: Skipping /boot Jan 17 12:06:23.432581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:23.489406 systemd[1]: Reloading finished in 345 ms. Jan 17 12:06:23.511480 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:06:23.523326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:23.531127 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:06:23.533898 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:06:23.536373 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:06:23.541741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:06:23.547215 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:23.550411 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:06:23.557568 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:06:23.560585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:23.560766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:06:23.565169 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:06:23.570591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:06:23.577069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:06:23.578636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:23.578760 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:23.583542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:23.583764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:06:23.584004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:23.584150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:23.587504 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:06:23.595887 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:06:23.600165 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:06:23.600926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 17 12:06:23.603450 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Jan 17 12:06:23.607119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:06:23.607348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:06:23.611760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:06:23.612161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:06:23.615690 systemd[1]: Finished ensure-sysext.service. Jan 17 12:06:23.620478 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:23.621393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:06:23.623725 augenrules[1361]: No rules Jan 17 12:06:23.631097 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:06:23.632442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:23.632507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:06:23.632577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:06:23.637013 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:06:23.643271 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:06:23.645958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:23.646512 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:06:23.648696 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:06:23.650275 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:23.652134 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:06:23.652330 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:06:23.654821 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:06:23.682953 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:23.684357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:06:23.685099 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:06:23.815955 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1397) Jan 17 12:06:23.854281 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:06:23.922867 systemd-networkd[1392]: lo: Link UP Jan 17 12:06:23.922888 systemd-networkd[1392]: lo: Gained carrier Jan 17 12:06:23.925872 systemd-resolved[1334]: Positive Trust Anchors: Jan 17 12:06:23.925893 systemd-resolved[1334]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:06:23.925927 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:06:23.928353 systemd-networkd[1392]: Enumeration completed Jan 17 12:06:23.928514 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:23.932106 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:23.932125 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:06:23.933039 systemd-resolved[1334]: Defaulting to hostname 'linux'. Jan 17 12:06:23.937191 systemd-networkd[1392]: eth0: Link UP Jan 17 12:06:23.937210 systemd-networkd[1392]: eth0: Gained carrier Jan 17 12:06:23.937239 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:23.944298 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:06:23.946847 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:06:23.946943 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:06:23.948566 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:06:23.951846 systemd[1]: Reached target network.target - Network. Jan 17 12:06:23.953810 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:06:23.953595 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:06:23.953816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:23.955206 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:06:23.958863 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 17 12:06:23.958919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:06:24.448919 systemd-timesyncd[1371]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:06:24.448971 systemd-timesyncd[1371]: Initial clock synchronization to Fri 2025-01-17 12:06:24.448785 UTC. Jan 17 12:06:24.449743 systemd-resolved[1334]: Clock change detected. Flushing caches. Jan 17 12:06:24.463129 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 12:06:24.463591 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:06:24.463817 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:06:24.464074 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:06:24.462345 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
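[note] Around the timesyncd lines above the journal timestamps jump from 12:06:23.958919 to 12:06:24.448919 and systemd-resolved logs "Clock change detected. Flushing caches." The gap between those two adjacent timestamps gives a rough estimate of the clock step applied at initial synchronization; it is only an upper bound, since a little real time also passed between the lines:

    # Rough estimate of the clock step around the timesyncd lines above: the gap
    # between the last pre-sync and first post-sync journal timestamps.
    print(round(24.448919 - 23.958919, 6))   # ~0.49 s apparent step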
Jan 17 12:06:24.497568 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:06:24.502942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:24.505891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:06:24.604516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:24.604851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:24.609550 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:06:24.621767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:24.711031 kernel: kvm_amd: TSC scaling supported Jan 17 12:06:24.711155 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:06:24.711176 kernel: kvm_amd: Nested Paging enabled Jan 17 12:06:24.711194 kernel: kvm_amd: LBR virtualization supported Jan 17 12:06:24.712864 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:06:24.712894 kernel: kvm_amd: Virtual GIF supported Jan 17 12:06:24.738574 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:06:24.774294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:24.790478 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:06:24.808927 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:06:24.821183 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:06:24.862811 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:06:24.864614 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:24.865842 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:24.867140 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:06:24.868478 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:06:24.870359 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:06:24.871614 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:06:24.872922 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:06:24.874222 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:06:24.874260 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:06:24.875195 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:06:24.877099 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:06:24.880218 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:06:24.889683 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:06:24.892479 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:06:24.894165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:06:24.895486 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:06:24.896576 systemd[1]: Reached target basic.target - Basic System. 
Jan 17 12:06:24.897712 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:06:24.897754 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:06:24.909685 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:06:24.912217 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:06:24.914350 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:06:24.916742 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:06:24.918754 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:06:24.920511 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:06:24.921894 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:06:24.928023 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:06:24.933790 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:06:24.936668 jq[1434]: false Jan 17 12:06:24.942787 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:06:24.950366 extend-filesystems[1435]: Found loop3 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found loop4 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found loop5 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found sr0 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda1 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda2 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda3 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found usr Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda4 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda6 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda7 Jan 17 12:06:24.950366 extend-filesystems[1435]: Found vda9 Jan 17 12:06:24.950366 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 17 12:06:24.967952 dbus-daemon[1433]: [system] SELinux support is enabled Jan 17 12:06:24.950650 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:06:24.951927 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:06:24.952859 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:06:24.992303 update_engine[1448]: I20250117 12:06:24.986917 1448 main.cc:92] Flatcar Update Engine starting Jan 17 12:06:24.955775 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:06:24.992686 jq[1451]: true Jan 17 12:06:24.962238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:06:24.963946 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:06:24.968494 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:06:24.968752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:06:24.969094 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 17 12:06:24.969307 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:06:24.971109 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:06:24.975312 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:06:24.976333 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:06:24.988457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:06:24.988489 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:06:24.990848 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:06:24.990868 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:06:24.997252 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 17 12:06:24.999210 update_engine[1448]: I20250117 12:06:24.999145 1448 update_check_scheduler.cc:74] Next update check in 3m54s Jan 17 12:06:25.000778 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:06:25.000878 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:06:25.005543 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:06:25.009051 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:06:25.009684 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:06:25.012995 tar[1454]: linux-amd64/helm Jan 17 12:06:25.031621 jq[1463]: true Jan 17 12:06:25.050761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Jan 17 12:06:25.070729 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:06:25.076042 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:06:25.076042 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:06:25.076042 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:06:25.081344 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 17 12:06:25.076622 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:06:25.076980 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:06:25.103214 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:06:25.103260 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:06:25.113044 systemd-logind[1446]: New seat seat0. Jan 17 12:06:25.114669 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:06:25.129154 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:06:25.133798 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:06:25.137513 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 17 12:06:25.160174 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:06:25.295435 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:06:25.376346 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:06:25.388041 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:06:25.403196 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:06:25.404289 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:06:25.413871 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:06:25.467412 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:06:25.476459 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:06:25.479762 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:06:25.481239 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:06:25.621863 containerd[1461]: time="2025-01-17T12:06:25.621688981Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:06:25.748030 containerd[1461]: time="2025-01-17T12:06:25.747855900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:25.750253 containerd[1461]: time="2025-01-17T12:06:25.750197421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:25.750253 containerd[1461]: time="2025-01-17T12:06:25.750238067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:06:25.750336 containerd[1461]: time="2025-01-17T12:06:25.750258515Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:06:25.750643 containerd[1461]: time="2025-01-17T12:06:25.750609053Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:06:25.750669 containerd[1461]: time="2025-01-17T12:06:25.750643928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:25.750780 containerd[1461]: time="2025-01-17T12:06:25.750747973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:25.750804 containerd[1461]: time="2025-01-17T12:06:25.750775936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751094 containerd[1461]: time="2025-01-17T12:06:25.751061050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751094 containerd[1461]: time="2025-01-17T12:06:25.751089474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751146 containerd[1461]: time="2025-01-17T12:06:25.751116544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751146 containerd[1461]: time="2025-01-17T12:06:25.751131112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751310 containerd[1461]: time="2025-01-17T12:06:25.751280291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751745 containerd[1461]: time="2025-01-17T12:06:25.751704818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751919 containerd[1461]: time="2025-01-17T12:06:25.751881779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:25.751919 containerd[1461]: time="2025-01-17T12:06:25.751908930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:06:25.752086 containerd[1461]: time="2025-01-17T12:06:25.752060795Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:06:25.752191 containerd[1461]: time="2025-01-17T12:06:25.752167085Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:06:25.757683 containerd[1461]: time="2025-01-17T12:06:25.757643415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:06:25.757732 containerd[1461]: time="2025-01-17T12:06:25.757702395Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:06:25.757762 containerd[1461]: time="2025-01-17T12:06:25.757725358Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:06:25.757762 containerd[1461]: time="2025-01-17T12:06:25.757750616Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:06:25.757835 containerd[1461]: time="2025-01-17T12:06:25.757768569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:06:25.757987 containerd[1461]: time="2025-01-17T12:06:25.757952775Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:06:25.758396 containerd[1461]: time="2025-01-17T12:06:25.758357594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:06:25.758603 containerd[1461]: time="2025-01-17T12:06:25.758563791Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:06:25.758603 containerd[1461]: time="2025-01-17T12:06:25.758591693Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:06:25.758662 containerd[1461]: time="2025-01-17T12:06:25.758614646Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 12:06:25.758662 containerd[1461]: time="2025-01-17T12:06:25.758633842Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758662 containerd[1461]: time="2025-01-17T12:06:25.758656254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758746 containerd[1461]: time="2025-01-17T12:06:25.758677273Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758746 containerd[1461]: time="2025-01-17T12:06:25.758701689Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758746 containerd[1461]: time="2025-01-17T12:06:25.758726386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758821 containerd[1461]: time="2025-01-17T12:06:25.758746734Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758821 containerd[1461]: time="2025-01-17T12:06:25.758764758Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758821 containerd[1461]: time="2025-01-17T12:06:25.758787721Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:06:25.758900 containerd[1461]: time="2025-01-17T12:06:25.758827986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.758900 containerd[1461]: time="2025-01-17T12:06:25.758863242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.758900 containerd[1461]: time="2025-01-17T12:06:25.758886416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.758985 containerd[1461]: time="2025-01-17T12:06:25.758907625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.758985 containerd[1461]: time="2025-01-17T12:06:25.758926050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.758985 containerd[1461]: time="2025-01-17T12:06:25.758944515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.758985 containerd[1461]: time="2025-01-17T12:06:25.758965454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759083 containerd[1461]: time="2025-01-17T12:06:25.758986423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759083 containerd[1461]: time="2025-01-17T12:06:25.759006090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759083 containerd[1461]: time="2025-01-17T12:06:25.759052728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759083 containerd[1461]: time="2025-01-17T12:06:25.759075140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 12:06:25.759196 containerd[1461]: time="2025-01-17T12:06:25.759097311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759196 containerd[1461]: time="2025-01-17T12:06:25.759128440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759254 containerd[1461]: time="2025-01-17T12:06:25.759204462Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:06:25.759254 containerd[1461]: time="2025-01-17T12:06:25.759244497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759317 containerd[1461]: time="2025-01-17T12:06:25.759262972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759317 containerd[1461]: time="2025-01-17T12:06:25.759282559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:06:25.759365 containerd[1461]: time="2025-01-17T12:06:25.759347190Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:06:25.759392 containerd[1461]: time="2025-01-17T12:06:25.759371555Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:06:25.759419 containerd[1461]: time="2025-01-17T12:06:25.759392505Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:06:25.759419 containerd[1461]: time="2025-01-17T12:06:25.759410589Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:06:25.759481 containerd[1461]: time="2025-01-17T12:06:25.759425106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:06:25.759481 containerd[1461]: time="2025-01-17T12:06:25.759448590Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:06:25.759481 containerd[1461]: time="2025-01-17T12:06:25.759478346Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:06:25.759571 containerd[1461]: time="2025-01-17T12:06:25.759493765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:06:25.763859 containerd[1461]: time="2025-01-17T12:06:25.763762821Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:06:25.764094 containerd[1461]: time="2025-01-17T12:06:25.763864261Z" level=info msg="Connect containerd service" Jan 17 12:06:25.764094 containerd[1461]: time="2025-01-17T12:06:25.763929403Z" level=info msg="using legacy CRI server" Jan 17 12:06:25.764094 containerd[1461]: time="2025-01-17T12:06:25.763942287Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:06:25.764161 containerd[1461]: time="2025-01-17T12:06:25.764097007Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:06:25.765073 containerd[1461]: time="2025-01-17T12:06:25.765043593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:06:25.765393 
containerd[1461]: time="2025-01-17T12:06:25.765330972Z" level=info msg="Start subscribing containerd event" Jan 17 12:06:25.765668 containerd[1461]: time="2025-01-17T12:06:25.765489178Z" level=info msg="Start recovering state" Jan 17 12:06:25.765764 containerd[1461]: time="2025-01-17T12:06:25.765731422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:06:25.765828 containerd[1461]: time="2025-01-17T12:06:25.765740680Z" level=info msg="Start event monitor" Jan 17 12:06:25.765894 containerd[1461]: time="2025-01-17T12:06:25.765881053Z" level=info msg="Start snapshots syncer" Jan 17 12:06:25.765959 containerd[1461]: time="2025-01-17T12:06:25.765946095Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:06:25.766013 containerd[1461]: time="2025-01-17T12:06:25.766000818Z" level=info msg="Start streaming server" Jan 17 12:06:25.766142 containerd[1461]: time="2025-01-17T12:06:25.765979568Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:06:25.766379 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:06:25.768544 containerd[1461]: time="2025-01-17T12:06:25.766990183Z" level=info msg="containerd successfully booted in 0.147044s" Jan 17 12:06:25.784049 tar[1454]: linux-amd64/LICENSE Jan 17 12:06:25.784187 tar[1454]: linux-amd64/README.md Jan 17 12:06:25.808120 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:06:26.143839 systemd-networkd[1392]: eth0: Gained IPv6LL Jan 17 12:06:26.148516 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:06:26.150681 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:06:26.162003 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:06:26.165369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:26.168207 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:06:26.189423 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:06:26.189716 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:06:26.191478 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:06:26.193775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:06:27.429281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:27.431291 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:06:27.433805 systemd[1]: Startup finished in 1.143s (kernel) + 6.535s (initrd) + 5.730s (userspace) = 13.410s. Jan 17 12:06:27.435588 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:27.956616 kubelet[1546]: E0117 12:06:27.956542 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:27.960801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:27.961078 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:06:27.961501 systemd[1]: kubelet.service: Consumed 1.638s CPU time. 
Jan 17 12:06:28.638805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:06:28.649202 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:32974.service - OpenSSH per-connection server daemon (10.0.0.1:32974). Jan 17 12:06:28.701235 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 32974 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:28.704136 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:28.715078 systemd-logind[1446]: New session 1 of user core. Jan 17 12:06:28.716769 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:06:28.725930 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:06:28.743195 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:06:28.756191 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:06:28.760129 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:06:28.891310 systemd[1564]: Queued start job for default target default.target. Jan 17 12:06:28.902820 systemd[1564]: Created slice app.slice - User Application Slice. Jan 17 12:06:28.902867 systemd[1564]: Reached target paths.target - Paths. Jan 17 12:06:28.902887 systemd[1564]: Reached target timers.target - Timers. Jan 17 12:06:28.905262 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:06:28.919031 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:06:28.919198 systemd[1564]: Reached target sockets.target - Sockets. Jan 17 12:06:28.919217 systemd[1564]: Reached target basic.target - Basic System. Jan 17 12:06:28.919271 systemd[1564]: Reached target default.target - Main User Target. Jan 17 12:06:28.919324 systemd[1564]: Startup finished in 150ms. Jan 17 12:06:28.919828 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:06:28.921633 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:06:28.985678 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988). Jan 17 12:06:29.046625 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:29.048663 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:29.053310 systemd-logind[1446]: New session 2 of user core. Jan 17 12:06:29.059652 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:06:29.116589 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 17 12:06:29.127549 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:32988.service: Deactivated successfully. Jan 17 12:06:29.129687 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:06:29.131808 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:06:29.141860 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:32990.service - OpenSSH per-connection server daemon (10.0.0.1:32990). Jan 17 12:06:29.143120 systemd-logind[1446]: Removed session 2. 
Jan 17 12:06:29.178252 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 32990 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:29.179957 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:29.185043 systemd-logind[1446]: New session 3 of user core. Jan 17 12:06:29.194743 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:06:29.247999 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 17 12:06:29.257726 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:32990.service: Deactivated successfully. Jan 17 12:06:29.259839 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:06:29.261710 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:06:29.272990 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:32996.service - OpenSSH per-connection server daemon (10.0.0.1:32996). Jan 17 12:06:29.274226 systemd-logind[1446]: Removed session 3. Jan 17 12:06:29.307497 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 32996 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:29.309306 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:29.313755 systemd-logind[1446]: New session 4 of user core. Jan 17 12:06:29.331767 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:06:29.388672 sshd[1589]: pam_unix(sshd:session): session closed for user core Jan 17 12:06:29.401428 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:32996.service: Deactivated successfully. Jan 17 12:06:29.403439 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:06:29.404938 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:06:29.414848 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:32998.service - OpenSSH per-connection server daemon (10.0.0.1:32998). Jan 17 12:06:29.415918 systemd-logind[1446]: Removed session 4. Jan 17 12:06:29.450476 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 32998 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:29.452299 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:29.456463 systemd-logind[1446]: New session 5 of user core. Jan 17 12:06:29.465695 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:06:29.527410 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:06:29.527860 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:06:29.548202 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 17 12:06:29.551186 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 17 12:06:29.565957 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:32998.service: Deactivated successfully. Jan 17 12:06:29.568785 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:06:29.571435 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:06:29.581363 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:33002.service - OpenSSH per-connection server daemon (10.0.0.1:33002). Jan 17 12:06:29.582638 systemd-logind[1446]: Removed session 5. 
Jan 17 12:06:29.620870 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:29.623295 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:29.628858 systemd-logind[1446]: New session 6 of user core. Jan 17 12:06:29.642852 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:06:29.700493 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:06:29.700984 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:06:29.706204 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 17 12:06:29.713856 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:06:29.714297 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:06:29.735862 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:06:29.737767 auditctl[1611]: No rules Jan 17 12:06:29.738313 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:06:29.738643 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:06:29.742630 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:06:29.790223 augenrules[1629]: No rules Jan 17 12:06:29.792763 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:06:29.794176 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 17 12:06:29.796301 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 17 12:06:29.806851 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:33002.service: Deactivated successfully. Jan 17 12:06:29.808876 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:06:29.810906 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:06:29.821826 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:33006.service - OpenSSH per-connection server daemon (10.0.0.1:33006). Jan 17 12:06:29.823111 systemd-logind[1446]: Removed session 6. Jan 17 12:06:29.861510 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 33006 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:06:29.863798 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:29.869544 systemd-logind[1446]: New session 7 of user core. Jan 17 12:06:29.879917 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:06:29.936914 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:06:29.937291 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:06:30.597916 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:06:30.598857 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:06:31.415726 dockerd[1658]: time="2025-01-17T12:06:31.415634878Z" level=info msg="Starting up" Jan 17 12:06:31.927650 dockerd[1658]: time="2025-01-17T12:06:31.927567344Z" level=info msg="Loading containers: start." 
Jan 17 12:06:32.055558 kernel: Initializing XFRM netlink socket Jan 17 12:06:32.159969 systemd-networkd[1392]: docker0: Link UP Jan 17 12:06:32.184439 dockerd[1658]: time="2025-01-17T12:06:32.184299018Z" level=info msg="Loading containers: done." Jan 17 12:06:32.208277 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2159489830-merged.mount: Deactivated successfully. Jan 17 12:06:32.209102 dockerd[1658]: time="2025-01-17T12:06:32.209036630Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:06:32.209224 dockerd[1658]: time="2025-01-17T12:06:32.209199746Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:06:32.209399 dockerd[1658]: time="2025-01-17T12:06:32.209371969Z" level=info msg="Daemon has completed initialization" Jan 17 12:06:32.255300 dockerd[1658]: time="2025-01-17T12:06:32.255170818Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:06:32.255469 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:06:33.674776 containerd[1461]: time="2025-01-17T12:06:33.674724718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 17 12:06:34.421561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002095615.mount: Deactivated successfully. Jan 17 12:06:35.980552 containerd[1461]: time="2025-01-17T12:06:35.980462730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:35.981277 containerd[1461]: time="2025-01-17T12:06:35.981225941Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 17 12:06:35.982695 containerd[1461]: time="2025-01-17T12:06:35.982654150Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:35.986309 containerd[1461]: time="2025-01-17T12:06:35.986261445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:35.987701 containerd[1461]: time="2025-01-17T12:06:35.987650389Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.312878954s" Jan 17 12:06:35.987701 containerd[1461]: time="2025-01-17T12:06:35.987693330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 17 12:06:35.990381 containerd[1461]: time="2025-01-17T12:06:35.990339051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 17 12:06:38.057624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:06:38.071883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:06:38.096647 containerd[1461]: time="2025-01-17T12:06:38.096449029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:38.127892 containerd[1461]: time="2025-01-17T12:06:38.127777552Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 17 12:06:38.129488 containerd[1461]: time="2025-01-17T12:06:38.129437805Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:38.133614 containerd[1461]: time="2025-01-17T12:06:38.133543956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:38.134916 containerd[1461]: time="2025-01-17T12:06:38.134850736Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.144472331s" Jan 17 12:06:38.135010 containerd[1461]: time="2025-01-17T12:06:38.134912532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 17 12:06:38.135925 containerd[1461]: time="2025-01-17T12:06:38.135663721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 17 12:06:38.298004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:38.303676 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:38.346803 kubelet[1873]: E0117 12:06:38.346622 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:38.353680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:38.353913 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:06:39.790948 containerd[1461]: time="2025-01-17T12:06:39.790864012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:39.792321 containerd[1461]: time="2025-01-17T12:06:39.791650928Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 17 12:06:39.793187 containerd[1461]: time="2025-01-17T12:06:39.793132637Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:39.796780 containerd[1461]: time="2025-01-17T12:06:39.796722920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:39.797923 containerd[1461]: time="2025-01-17T12:06:39.797876183Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.662155595s" Jan 17 12:06:39.797923 containerd[1461]: time="2025-01-17T12:06:39.797919554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 17 12:06:39.798517 containerd[1461]: time="2025-01-17T12:06:39.798475727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 17 12:06:41.051388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782818704.mount: Deactivated successfully. 
Jan 17 12:06:41.697743 containerd[1461]: time="2025-01-17T12:06:41.697639758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:41.698699 containerd[1461]: time="2025-01-17T12:06:41.698607963Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 17 12:06:41.699832 containerd[1461]: time="2025-01-17T12:06:41.699786644Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:41.702032 containerd[1461]: time="2025-01-17T12:06:41.701989114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:41.702624 containerd[1461]: time="2025-01-17T12:06:41.702579471Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.904066444s" Jan 17 12:06:41.702686 containerd[1461]: time="2025-01-17T12:06:41.702624045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 17 12:06:41.703223 containerd[1461]: time="2025-01-17T12:06:41.703199965Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:06:42.259959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359070155.mount: Deactivated successfully. 
Jan 17 12:06:43.496992 containerd[1461]: time="2025-01-17T12:06:43.496925040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:43.499455 containerd[1461]: time="2025-01-17T12:06:43.499382178Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:06:43.500982 containerd[1461]: time="2025-01-17T12:06:43.500941221Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:43.504202 containerd[1461]: time="2025-01-17T12:06:43.504158555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:43.505384 containerd[1461]: time="2025-01-17T12:06:43.505323350Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.802091324s" Jan 17 12:06:43.505473 containerd[1461]: time="2025-01-17T12:06:43.505383853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:06:43.505948 containerd[1461]: time="2025-01-17T12:06:43.505897657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 12:06:44.044652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864157543.mount: Deactivated successfully. 
Jan 17 12:06:44.052968 containerd[1461]: time="2025-01-17T12:06:44.052899180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:44.053902 containerd[1461]: time="2025-01-17T12:06:44.053820467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 12:06:44.055691 containerd[1461]: time="2025-01-17T12:06:44.055638858Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:44.058231 containerd[1461]: time="2025-01-17T12:06:44.058163773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:44.059078 containerd[1461]: time="2025-01-17T12:06:44.059018906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 553.086735ms" Jan 17 12:06:44.059078 containerd[1461]: time="2025-01-17T12:06:44.059065173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 12:06:44.059820 containerd[1461]: time="2025-01-17T12:06:44.059709672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 17 12:06:44.595925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396278656.mount: Deactivated successfully. Jan 17 12:06:46.904470 containerd[1461]: time="2025-01-17T12:06:46.904339729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:46.905211 containerd[1461]: time="2025-01-17T12:06:46.905082441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 17 12:06:46.906821 containerd[1461]: time="2025-01-17T12:06:46.906781347Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:46.910457 containerd[1461]: time="2025-01-17T12:06:46.910414882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:06:46.912380 containerd[1461]: time="2025-01-17T12:06:46.912319383Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.852571399s" Jan 17 12:06:46.912432 containerd[1461]: time="2025-01-17T12:06:46.912381269Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 17 12:06:48.557348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 12:06:48.567718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:48.719830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:48.724602 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:48.764990 kubelet[2027]: E0117 12:06:48.764906 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:48.769748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:48.770001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:06:49.928861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:49.944821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:49.977569 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit session-7.scope)... Jan 17 12:06:49.977588 systemd[1]: Reloading... Jan 17 12:06:50.093584 zram_generator::config[2085]: No configuration found. Jan 17 12:06:50.449476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:50.545238 systemd[1]: Reloading finished in 567 ms. Jan 17 12:06:50.597640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:06:50.597741 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:06:50.598041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:50.601491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:50.790988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:50.807307 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:06:50.888372 kubelet[2131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:06:50.888372 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:06:50.888372 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:06:50.901064 kubelet[2131]: I0117 12:06:50.900981 2131 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:06:51.287756 kubelet[2131]: I0117 12:06:51.287674 2131 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:06:51.287756 kubelet[2131]: I0117 12:06:51.287729 2131 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:06:51.288219 kubelet[2131]: I0117 12:06:51.288193 2131 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:06:51.328665 kubelet[2131]: I0117 12:06:51.328579 2131 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:06:51.334422 kubelet[2131]: E0117 12:06:51.334333 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:51.354116 kubelet[2131]: E0117 12:06:51.354039 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:06:51.354116 kubelet[2131]: I0117 12:06:51.354108 2131 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:06:51.363978 kubelet[2131]: I0117 12:06:51.363927 2131 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:06:51.371107 kubelet[2131]: I0117 12:06:51.371050 2131 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:06:51.371380 kubelet[2131]: I0117 12:06:51.371307 2131 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:06:51.371660 kubelet[2131]: I0117 12:06:51.371365 2131 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:06:51.371802 kubelet[2131]: I0117 12:06:51.371667 2131 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:06:51.371802 kubelet[2131]: I0117 12:06:51.371682 2131 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:06:51.371891 kubelet[2131]: I0117 12:06:51.371867 2131 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:06:51.379729 kubelet[2131]: I0117 12:06:51.379674 2131 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:06:51.379787 kubelet[2131]: I0117 12:06:51.379737 2131 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:06:51.379813 kubelet[2131]: I0117 12:06:51.379801 2131 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:06:51.379837 kubelet[2131]: I0117 12:06:51.379822 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:06:51.380893 kubelet[2131]: W0117 12:06:51.380771 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:51.380893 kubelet[2131]: W0117 12:06:51.380791 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: 
connection refused Jan 17 12:06:51.380893 kubelet[2131]: E0117 12:06:51.380860 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:51.380995 kubelet[2131]: E0117 12:06:51.380874 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:51.389107 kubelet[2131]: I0117 12:06:51.389056 2131 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:06:51.391202 kubelet[2131]: I0117 12:06:51.391157 2131 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:06:51.392325 kubelet[2131]: W0117 12:06:51.392296 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:06:51.393852 kubelet[2131]: I0117 12:06:51.393814 2131 server.go:1269] "Started kubelet" Jan 17 12:06:51.394559 kubelet[2131]: I0117 12:06:51.393941 2131 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:06:51.394559 kubelet[2131]: I0117 12:06:51.394222 2131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:06:51.394770 kubelet[2131]: I0117 12:06:51.394745 2131 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:06:51.395674 kubelet[2131]: I0117 12:06:51.395039 2131 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:06:51.397006 kubelet[2131]: I0117 12:06:51.396973 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:06:51.397199 kubelet[2131]: I0117 12:06:51.397166 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:06:51.397374 kubelet[2131]: E0117 12:06:51.397341 2131 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:06:51.397983 kubelet[2131]: E0117 12:06:51.397946 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:51.397983 kubelet[2131]: I0117 12:06:51.397979 2131 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:06:51.398272 kubelet[2131]: I0117 12:06:51.398252 2131 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:06:51.398356 kubelet[2131]: I0117 12:06:51.398338 2131 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:06:51.398937 kubelet[2131]: W0117 12:06:51.398866 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:51.398937 kubelet[2131]: E0117 12:06:51.398929 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:51.399152 kubelet[2131]: E0117 12:06:51.399122 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Jan 17 12:06:51.399722 kubelet[2131]: I0117 12:06:51.399697 2131 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:06:51.400963 kubelet[2131]: I0117 12:06:51.400941 2131 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:06:51.400963 kubelet[2131]: I0117 12:06:51.400960 2131 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:06:51.409738 kubelet[2131]: E0117 12:06:51.407161 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7971e6e950b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:06:51.39377375 +0000 UTC m=+0.581848144,LastTimestamp:2025-01-17 12:06:51.39377375 +0000 UTC m=+0.581848144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:06:51.422180 kubelet[2131]: I0117 12:06:51.422142 2131 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:06:51.422180 kubelet[2131]: I0117 12:06:51.422163 2131 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:06:51.422180 kubelet[2131]: I0117 12:06:51.422184 2131 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:06:51.426974 kubelet[2131]: I0117 12:06:51.426877 2131 kubelet_network_linux.go:50] "Initialized iptables 
rules." protocol="IPv4" Jan 17 12:06:51.429067 kubelet[2131]: I0117 12:06:51.429021 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:06:51.429209 kubelet[2131]: I0117 12:06:51.429114 2131 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:06:51.429209 kubelet[2131]: I0117 12:06:51.429155 2131 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:06:51.429278 kubelet[2131]: E0117 12:06:51.429224 2131 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:06:51.432432 kubelet[2131]: W0117 12:06:51.430802 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:51.432432 kubelet[2131]: E0117 12:06:51.430858 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:51.498693 kubelet[2131]: E0117 12:06:51.498632 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:51.530203 kubelet[2131]: E0117 12:06:51.530112 2131 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:06:51.599485 kubelet[2131]: E0117 12:06:51.599411 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:51.600076 kubelet[2131]: E0117 12:06:51.600018 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Jan 17 12:06:51.700422 kubelet[2131]: E0117 12:06:51.700334 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:51.730668 kubelet[2131]: E0117 12:06:51.730589 2131 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:06:51.801184 kubelet[2131]: E0117 12:06:51.801110 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:51.901604 kubelet[2131]: E0117 12:06:51.901243 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.001293 kubelet[2131]: E0117 12:06:52.001214 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Jan 17 12:06:52.002255 kubelet[2131]: E0117 12:06:52.002217 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.102835 kubelet[2131]: E0117 12:06:52.102773 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 
12:06:52.131059 kubelet[2131]: E0117 12:06:52.130996 2131 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:06:52.203750 kubelet[2131]: E0117 12:06:52.203562 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.304463 kubelet[2131]: E0117 12:06:52.304401 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.322217 kubelet[2131]: W0117 12:06:52.322094 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:52.322400 kubelet[2131]: E0117 12:06:52.322226 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:52.361345 kubelet[2131]: W0117 12:06:52.361246 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:52.361345 kubelet[2131]: E0117 12:06:52.361343 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:52.405126 kubelet[2131]: E0117 12:06:52.405065 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.506045 kubelet[2131]: E0117 12:06:52.505894 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.536630 kubelet[2131]: W0117 12:06:52.536559 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:52.536688 kubelet[2131]: E0117 12:06:52.536644 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:52.547510 kubelet[2131]: W0117 12:06:52.547481 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:52.547596 kubelet[2131]: E0117 12:06:52.547507 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:52.606389 kubelet[2131]: E0117 12:06:52.606301 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.707060 kubelet[2131]: E0117 12:06:52.706985 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.802161 kubelet[2131]: E0117 12:06:52.801985 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Jan 17 12:06:52.808167 kubelet[2131]: E0117 12:06:52.808128 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.908466 kubelet[2131]: E0117 12:06:52.908384 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:52.931664 kubelet[2131]: E0117 12:06:52.931604 2131 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:06:53.009440 kubelet[2131]: E0117 12:06:53.009384 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.110151 kubelet[2131]: E0117 12:06:53.110091 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.210967 kubelet[2131]: E0117 12:06:53.210897 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.311856 kubelet[2131]: E0117 12:06:53.311797 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.349978 kubelet[2131]: E0117 12:06:53.349910 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:53.412802 kubelet[2131]: E0117 12:06:53.412655 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.513906 kubelet[2131]: E0117 12:06:53.513794 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.614741 kubelet[2131]: E0117 12:06:53.614619 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.626792 kubelet[2131]: I0117 12:06:53.626709 2131 policy_none.go:49] "None policy: Start" Jan 17 12:06:53.627876 kubelet[2131]: I0117 12:06:53.627850 2131 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:06:53.627919 kubelet[2131]: I0117 12:06:53.627882 2131 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:06:53.714878 kubelet[2131]: E0117 12:06:53.714709 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 
12:06:53.815475 kubelet[2131]: E0117 12:06:53.815420 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:53.915710 kubelet[2131]: E0117 12:06:53.915641 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.016352 kubelet[2131]: E0117 12:06:54.016210 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.116868 kubelet[2131]: E0117 12:06:54.116818 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.217455 kubelet[2131]: E0117 12:06:54.217399 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.318040 kubelet[2131]: E0117 12:06:54.317997 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.402662 kubelet[2131]: E0117 12:06:54.402601 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Jan 17 12:06:54.418995 kubelet[2131]: E0117 12:06:54.418929 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.519925 kubelet[2131]: E0117 12:06:54.519856 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.532115 kubelet[2131]: E0117 12:06:54.532061 2131 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:06:54.540727 kubelet[2131]: W0117 12:06:54.540685 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:54.540772 kubelet[2131]: E0117 12:06:54.540732 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:54.611769 kubelet[2131]: W0117 12:06:54.611629 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:54.611769 kubelet[2131]: E0117 12:06:54.611695 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:54.614077 kubelet[2131]: W0117 12:06:54.614021 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial 
tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:54.614127 kubelet[2131]: E0117 12:06:54.614082 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:54.620603 kubelet[2131]: E0117 12:06:54.620562 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.721378 kubelet[2131]: E0117 12:06:54.721316 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.821995 kubelet[2131]: E0117 12:06:54.821931 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.905160 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:06:54.920197 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:06:54.922831 kubelet[2131]: E0117 12:06:54.922787 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:06:54.923474 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:06:54.933500 kubelet[2131]: I0117 12:06:54.933466 2131 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:06:54.933780 kubelet[2131]: I0117 12:06:54.933752 2131 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:06:54.933899 kubelet[2131]: I0117 12:06:54.933773 2131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:06:54.934144 kubelet[2131]: I0117 12:06:54.934116 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:06:54.935540 kubelet[2131]: E0117 12:06:54.935484 2131 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:06:55.036150 kubelet[2131]: I0117 12:06:55.036091 2131 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:06:55.036590 kubelet[2131]: E0117 12:06:55.036550 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 17 12:06:55.239135 kubelet[2131]: I0117 12:06:55.238969 2131 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:06:55.239439 kubelet[2131]: E0117 12:06:55.239395 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 17 12:06:55.518240 kubelet[2131]: W0117 12:06:55.518066 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:55.518240 kubelet[2131]: E0117 12:06:55.518141 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:55.641689 kubelet[2131]: I0117 12:06:55.641636 2131 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:06:55.642079 kubelet[2131]: E0117 12:06:55.642042 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 17 12:06:56.444301 kubelet[2131]: I0117 12:06:56.444236 2131 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:06:56.444823 kubelet[2131]: E0117 12:06:56.444774 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 17 12:06:57.425793 kubelet[2131]: E0117 12:06:57.425725 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:57.594916 kubelet[2131]: E0117 12:06:57.594765 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7971e6e950b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:06:51.39377375 +0000 UTC m=+0.581848144,LastTimestamp:2025-01-17 12:06:51.39377375 +0000 UTC m=+0.581848144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:06:57.603372 kubelet[2131]: E0117 12:06:57.603337 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="6.4s" Jan 17 12:06:57.741507 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 17 12:06:57.756485 systemd[1]: Created slice kubepods-burstable-podb89f1ca0de97b8b9e0252b23ea189155.slice - libcontainer container kubepods-burstable-podb89f1ca0de97b8b9e0252b23ea189155.slice. Jan 17 12:06:57.760116 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. 
Jan 17 12:06:57.838810 kubelet[2131]: I0117 12:06:57.838733 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:06:57.838810 kubelet[2131]: I0117 12:06:57.838785 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b89f1ca0de97b8b9e0252b23ea189155-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89f1ca0de97b8b9e0252b23ea189155\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:06:57.838810 kubelet[2131]: I0117 12:06:57.838804 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b89f1ca0de97b8b9e0252b23ea189155-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b89f1ca0de97b8b9e0252b23ea189155\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:06:57.838810 kubelet[2131]: I0117 12:06:57.838825 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:06:57.839126 kubelet[2131]: I0117 12:06:57.838858 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:06:57.839126 kubelet[2131]: I0117 12:06:57.838902 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:06:57.839126 kubelet[2131]: I0117 12:06:57.838928 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:06:57.839126 kubelet[2131]: I0117 12:06:57.838950 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b89f1ca0de97b8b9e0252b23ea189155-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89f1ca0de97b8b9e0252b23ea189155\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:06:57.839126 kubelet[2131]: I0117 12:06:57.838972 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 17 12:06:58.047772 kubelet[2131]: I0117 12:06:58.047572 2131 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:06:58.048196 kubelet[2131]: E0117 12:06:58.048142 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 17 12:06:58.054673 kubelet[2131]: E0117 12:06:58.054631 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:06:58.055584 containerd[1461]: time="2025-01-17T12:06:58.055505389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 17 12:06:58.059750 kubelet[2131]: E0117 12:06:58.059697 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:06:58.060395 containerd[1461]: time="2025-01-17T12:06:58.060340981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b89f1ca0de97b8b9e0252b23ea189155,Namespace:kube-system,Attempt:0,}" Jan 17 12:06:58.062649 kubelet[2131]: E0117 12:06:58.062628 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:06:58.063046 containerd[1461]: time="2025-01-17T12:06:58.063013280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 17 12:06:58.582791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389736034.mount: Deactivated successfully. 
Jan 17 12:06:58.589095 containerd[1461]: time="2025-01-17T12:06:58.589038323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:06:58.592258 containerd[1461]: time="2025-01-17T12:06:58.592210041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:06:58.593079 containerd[1461]: time="2025-01-17T12:06:58.593044261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:06:58.593980 containerd[1461]: time="2025-01-17T12:06:58.593947074Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:06:58.595290 containerd[1461]: time="2025-01-17T12:06:58.595235277Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:06:58.596256 containerd[1461]: time="2025-01-17T12:06:58.596218925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:06:58.597120 containerd[1461]: time="2025-01-17T12:06:58.597062304Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:06:58.598733 containerd[1461]: time="2025-01-17T12:06:58.598697172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:06:58.601041 containerd[1461]: time="2025-01-17T12:06:58.601010132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.925365ms" Jan 17 12:06:58.602465 containerd[1461]: time="2025-01-17T12:06:58.602431228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.997359ms" Jan 17 12:06:58.611550 containerd[1461]: time="2025-01-17T12:06:58.611451251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.819881ms" Jan 17 12:06:58.773593 containerd[1461]: time="2025-01-17T12:06:58.773444495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773515562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773634620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773577781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773664237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773681710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773789577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773445487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773512275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773562952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:06:58.773895 containerd[1461]: time="2025-01-17T12:06:58.773661111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:06:58.775263 containerd[1461]: time="2025-01-17T12:06:58.775088089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:06:58.808752 systemd[1]: Started cri-containerd-974f277b01c506ebcb7d627fdf575b64f53dc65739ff6abf4a55e9210d62a9a6.scope - libcontainer container 974f277b01c506ebcb7d627fdf575b64f53dc65739ff6abf4a55e9210d62a9a6. Jan 17 12:06:58.813242 systemd[1]: Started cri-containerd-16ca94dfd26cfd071eb6e04df5aa845935053b81d429f14d87a06e13bf47f298.scope - libcontainer container 16ca94dfd26cfd071eb6e04df5aa845935053b81d429f14d87a06e13bf47f298. Jan 17 12:06:58.815476 systemd[1]: Started cri-containerd-3dbd36a42b71a20593955d13d37b14cde63aebf7bb27b943f548c87011383810.scope - libcontainer container 3dbd36a42b71a20593955d13d37b14cde63aebf7bb27b943f548c87011383810. 
Jan 17 12:06:58.852020 kubelet[2131]: W0117 12:06:58.851805 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 17 12:06:58.852020 kubelet[2131]: E0117 12:06:58.851895 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:06:58.870378 containerd[1461]: time="2025-01-17T12:06:58.869881924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"974f277b01c506ebcb7d627fdf575b64f53dc65739ff6abf4a55e9210d62a9a6\"" Jan 17 12:06:58.871091 kubelet[2131]: E0117 12:06:58.871066 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:06:58.873363 containerd[1461]: time="2025-01-17T12:06:58.873212285Z" level=info msg="CreateContainer within sandbox \"974f277b01c506ebcb7d627fdf575b64f53dc65739ff6abf4a55e9210d62a9a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:06:58.876922 containerd[1461]: time="2025-01-17T12:06:58.876584147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b89f1ca0de97b8b9e0252b23ea189155,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ca94dfd26cfd071eb6e04df5aa845935053b81d429f14d87a06e13bf47f298\"" Jan 17 12:06:58.879375 kubelet[2131]: E0117 12:06:58.879223 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:06:58.879769 containerd[1461]: time="2025-01-17T12:06:58.879702182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dbd36a42b71a20593955d13d37b14cde63aebf7bb27b943f548c87011383810\"" Jan 17 12:06:58.881277 kubelet[2131]: E0117 12:06:58.881222 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:06:58.882188 containerd[1461]: time="2025-01-17T12:06:58.882141724Z" level=info msg="CreateContainer within sandbox \"16ca94dfd26cfd071eb6e04df5aa845935053b81d429f14d87a06e13bf47f298\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:06:58.883443 containerd[1461]: time="2025-01-17T12:06:58.883025150Z" level=info msg="CreateContainer within sandbox \"3dbd36a42b71a20593955d13d37b14cde63aebf7bb27b943f548c87011383810\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:06:59.383006 containerd[1461]: time="2025-01-17T12:06:59.382924659Z" level=info msg="CreateContainer within sandbox \"974f277b01c506ebcb7d627fdf575b64f53dc65739ff6abf4a55e9210d62a9a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f694946a86aeb19ecd10ac45b515fd0775b8b4a7a0762eea90f599895d578e7a\"" Jan 17 12:06:59.383905 
containerd[1461]: time="2025-01-17T12:06:59.383877925Z" level=info msg="StartContainer for \"f694946a86aeb19ecd10ac45b515fd0775b8b4a7a0762eea90f599895d578e7a\"" Jan 17 12:06:59.384215 containerd[1461]: time="2025-01-17T12:06:59.384172229Z" level=info msg="CreateContainer within sandbox \"3dbd36a42b71a20593955d13d37b14cde63aebf7bb27b943f548c87011383810\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3929daba47a83af960ca2ef0123296991d0fac6acc3f0f5cc1a332b225abd91b\"" Jan 17 12:06:59.384831 containerd[1461]: time="2025-01-17T12:06:59.384768803Z" level=info msg="StartContainer for \"3929daba47a83af960ca2ef0123296991d0fac6acc3f0f5cc1a332b225abd91b\"" Jan 17 12:06:59.386889 containerd[1461]: time="2025-01-17T12:06:59.386839723Z" level=info msg="CreateContainer within sandbox \"16ca94dfd26cfd071eb6e04df5aa845935053b81d429f14d87a06e13bf47f298\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0bb3a23fbf152ba8fd1eae1da8f8842af975bbc4fc824adeb6698a8e9a7c9b6c\"" Jan 17 12:06:59.387351 containerd[1461]: time="2025-01-17T12:06:59.387320413Z" level=info msg="StartContainer for \"0bb3a23fbf152ba8fd1eae1da8f8842af975bbc4fc824adeb6698a8e9a7c9b6c\"" Jan 17 12:06:59.474905 systemd[1]: Started cri-containerd-0bb3a23fbf152ba8fd1eae1da8f8842af975bbc4fc824adeb6698a8e9a7c9b6c.scope - libcontainer container 0bb3a23fbf152ba8fd1eae1da8f8842af975bbc4fc824adeb6698a8e9a7c9b6c. Jan 17 12:06:59.500677 systemd[1]: Started cri-containerd-3929daba47a83af960ca2ef0123296991d0fac6acc3f0f5cc1a332b225abd91b.scope - libcontainer container 3929daba47a83af960ca2ef0123296991d0fac6acc3f0f5cc1a332b225abd91b. Jan 17 12:06:59.502398 systemd[1]: Started cri-containerd-f694946a86aeb19ecd10ac45b515fd0775b8b4a7a0762eea90f599895d578e7a.scope - libcontainer container f694946a86aeb19ecd10ac45b515fd0775b8b4a7a0762eea90f599895d578e7a. 
Jan 17 12:06:59.531595 containerd[1461]: time="2025-01-17T12:06:59.531505741Z" level=info msg="StartContainer for \"0bb3a23fbf152ba8fd1eae1da8f8842af975bbc4fc824adeb6698a8e9a7c9b6c\" returns successfully" Jan 17 12:06:59.568400 containerd[1461]: time="2025-01-17T12:06:59.568343223Z" level=info msg="StartContainer for \"3929daba47a83af960ca2ef0123296991d0fac6acc3f0f5cc1a332b225abd91b\" returns successfully" Jan 17 12:06:59.570635 containerd[1461]: time="2025-01-17T12:06:59.568343253Z" level=info msg="StartContainer for \"f694946a86aeb19ecd10ac45b515fd0775b8b4a7a0762eea90f599895d578e7a\" returns successfully" Jan 17 12:07:00.487894 kubelet[2131]: E0117 12:07:00.487787 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:00.490046 kubelet[2131]: E0117 12:07:00.488830 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:00.497182 kubelet[2131]: E0117 12:07:00.497073 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:01.249936 kubelet[2131]: I0117 12:07:01.249886 2131 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:07:01.256095 kubelet[2131]: I0117 12:07:01.256066 2131 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 17 12:07:01.256095 kubelet[2131]: E0117 12:07:01.256098 2131 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 17 12:07:01.385723 kubelet[2131]: I0117 12:07:01.385684 2131 apiserver.go:52] "Watching apiserver" Jan 17 12:07:01.398699 kubelet[2131]: I0117 12:07:01.398655 2131 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:07:01.501908 kubelet[2131]: E0117 12:07:01.501760 2131 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 12:07:01.501908 kubelet[2131]: E0117 12:07:01.501766 2131 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 12:07:01.501908 kubelet[2131]: E0117 12:07:01.501776 2131 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:01.502447 kubelet[2131]: E0117 12:07:01.501919 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:01.502447 kubelet[2131]: E0117 12:07:01.501960 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:01.502447 kubelet[2131]: E0117 12:07:01.502006 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:02.508962 kubelet[2131]: E0117 12:07:02.508909 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:02.539383 kubelet[2131]: E0117 12:07:02.539301 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:03.498199 kubelet[2131]: E0117 12:07:03.498159 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:03.498402 kubelet[2131]: E0117 12:07:03.498162 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:03.546172 kubelet[2131]: E0117 12:07:03.546127 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:03.761610 systemd[1]: Reloading requested from client PID 2413 ('systemctl') (unit session-7.scope)... Jan 17 12:07:03.761626 systemd[1]: Reloading... Jan 17 12:07:03.886626 zram_generator::config[2449]: No configuration found. Jan 17 12:07:04.008295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:07:04.109748 systemd[1]: Reloading finished in 347 ms. Jan 17 12:07:04.157685 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:07:04.176403 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:07:04.176823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:04.176887 systemd[1]: kubelet.service: Consumed 1.413s CPU time, 121.0M memory peak, 0B memory swap peak. Jan 17 12:07:04.188986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:07:04.370514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:04.376180 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:07:04.423863 kubelet[2497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:07:04.424410 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:07:04.424410 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:07:04.424410 kubelet[2497]: I0117 12:07:04.424093 2497 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:07:04.432398 kubelet[2497]: I0117 12:07:04.432314 2497 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:07:04.432398 kubelet[2497]: I0117 12:07:04.432363 2497 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:07:04.432756 kubelet[2497]: I0117 12:07:04.432721 2497 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:07:04.433993 kubelet[2497]: I0117 12:07:04.433958 2497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:07:04.435910 kubelet[2497]: I0117 12:07:04.435886 2497 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:07:04.443070 kubelet[2497]: E0117 12:07:04.443001 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:07:04.443070 kubelet[2497]: I0117 12:07:04.443054 2497 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:07:04.447833 kubelet[2497]: I0117 12:07:04.447803 2497 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:07:04.447975 kubelet[2497]: I0117 12:07:04.447951 2497 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:07:04.448149 kubelet[2497]: I0117 12:07:04.448113 2497 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:07:04.448351 kubelet[2497]: I0117 12:07:04.448146 2497 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:07:04.448351 kubelet[2497]: I0117 12:07:04.448351 2497 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:07:04.448515 kubelet[2497]: I0117 12:07:04.448361 2497 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:07:04.448515 kubelet[2497]: I0117 12:07:04.448401 2497 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:07:04.448515 kubelet[2497]: I0117 12:07:04.448538 2497 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:07:04.448657 kubelet[2497]: I0117 12:07:04.448556 2497 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:07:04.448657 kubelet[2497]: I0117 12:07:04.448592 2497 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:07:04.448657 kubelet[2497]: I0117 12:07:04.448610 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:07:04.449359 kubelet[2497]: I0117 12:07:04.449310 2497 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:07:04.449819 kubelet[2497]: I0117 12:07:04.449783 2497 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:07:04.450410 kubelet[2497]: I0117 12:07:04.450377 2497 server.go:1269] "Started kubelet" Jan 17 12:07:04.455240 kubelet[2497]: I0117 12:07:04.452222 2497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:07:04.455240 kubelet[2497]: I0117 12:07:04.452653 2497 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:07:04.455240 kubelet[2497]: I0117 12:07:04.452710 2497 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:07:04.455240 kubelet[2497]: I0117 12:07:04.452941 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:07:04.455920 kubelet[2497]: I0117 12:07:04.455852 2497 server.go:460] "Adding 
debug handlers to kubelet server" Jan 17 12:07:04.459691 kubelet[2497]: I0117 12:07:04.459657 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:07:04.461266 kubelet[2497]: E0117 12:07:04.460650 2497 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:07:04.461402 kubelet[2497]: I0117 12:07:04.461283 2497 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:07:04.461813 kubelet[2497]: E0117 12:07:04.461765 2497 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:07:04.462780 kubelet[2497]: I0117 12:07:04.462753 2497 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:07:04.462928 kubelet[2497]: I0117 12:07:04.462900 2497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:07:04.463477 kubelet[2497]: I0117 12:07:04.463444 2497 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:07:04.464136 kubelet[2497]: I0117 12:07:04.464108 2497 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:07:04.467027 kubelet[2497]: I0117 12:07:04.466997 2497 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:07:04.476306 kubelet[2497]: I0117 12:07:04.476248 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:07:04.477927 kubelet[2497]: I0117 12:07:04.477899 2497 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:07:04.477992 kubelet[2497]: I0117 12:07:04.477954 2497 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:07:04.477992 kubelet[2497]: I0117 12:07:04.477983 2497 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:07:04.478061 kubelet[2497]: E0117 12:07:04.478039 2497 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:07:04.506452 kubelet[2497]: I0117 12:07:04.506401 2497 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:07:04.506452 kubelet[2497]: I0117 12:07:04.506430 2497 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:07:04.506452 kubelet[2497]: I0117 12:07:04.506449 2497 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:07:04.506724 kubelet[2497]: I0117 12:07:04.506637 2497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:07:04.506724 kubelet[2497]: I0117 12:07:04.506649 2497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:07:04.506724 kubelet[2497]: I0117 12:07:04.506668 2497 policy_none.go:49] "None policy: Start" Jan 17 12:07:04.507562 kubelet[2497]: I0117 12:07:04.507247 2497 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:07:04.507562 kubelet[2497]: I0117 12:07:04.507289 2497 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:07:04.507562 kubelet[2497]: I0117 12:07:04.507501 2497 state_mem.go:75] "Updated machine memory state" Jan 17 12:07:04.541624 kubelet[2497]: I0117 12:07:04.541579 2497 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:07:04.541821 kubelet[2497]: I0117 12:07:04.541799 2497 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:07:04.541877 kubelet[2497]: I0117 12:07:04.541815 2497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:07:04.542502 kubelet[2497]: I0117 12:07:04.542442 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:07:04.586293 kubelet[2497]: E0117 12:07:04.586238 2497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:07:04.587558 kubelet[2497]: E0117 12:07:04.587515 2497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 12:07:04.587639 kubelet[2497]: E0117 12:07:04.587516 2497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:04.649063 kubelet[2497]: I0117 12:07:04.648894 2497 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 17 12:07:04.664857 kubelet[2497]: I0117 12:07:04.664745 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:07:04.664857 kubelet[2497]: I0117 12:07:04.664795 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/b89f1ca0de97b8b9e0252b23ea189155-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89f1ca0de97b8b9e0252b23ea189155\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:07:04.664857 kubelet[2497]: I0117 12:07:04.664818 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b89f1ca0de97b8b9e0252b23ea189155-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b89f1ca0de97b8b9e0252b23ea189155\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:07:04.664857 kubelet[2497]: I0117 12:07:04.664834 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:04.665142 kubelet[2497]: I0117 12:07:04.664897 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:04.665142 kubelet[2497]: I0117 12:07:04.664944 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:04.665142 kubelet[2497]: I0117 12:07:04.664975 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b89f1ca0de97b8b9e0252b23ea189155-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89f1ca0de97b8b9e0252b23ea189155\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:07:04.665142 kubelet[2497]: I0117 12:07:04.664989 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:04.665142 kubelet[2497]: I0117 12:07:04.665004 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:07:04.713551 kubelet[2497]: I0117 12:07:04.713484 2497 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 17 12:07:04.713725 kubelet[2497]: I0117 12:07:04.713631 2497 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 17 12:07:04.887378 kubelet[2497]: E0117 12:07:04.887330 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 17 12:07:04.887720 kubelet[2497]: E0117 12:07:04.887694 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:04.887843 kubelet[2497]: E0117 12:07:04.887816 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:05.449279 kubelet[2497]: I0117 12:07:05.449213 2497 apiserver.go:52] "Watching apiserver" Jan 17 12:07:05.464493 kubelet[2497]: I0117 12:07:05.464444 2497 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:07:05.491813 kubelet[2497]: E0117 12:07:05.491765 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:05.491813 kubelet[2497]: E0117 12:07:05.491783 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:05.491981 kubelet[2497]: E0117 12:07:05.491928 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:05.589239 kubelet[2497]: I0117 12:07:05.589106 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.589086493 podStartE2EDuration="3.589086493s" podCreationTimestamp="2025-01-17 12:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:05.589070974 +0000 UTC m=+1.205363111" watchObservedRunningTime="2025-01-17 12:07:05.589086493 +0000 UTC m=+1.205378630" Jan 17 12:07:05.737347 kubelet[2497]: I0117 12:07:05.737152 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.736661844 podStartE2EDuration="2.736661844s" podCreationTimestamp="2025-01-17 12:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:05.736615284 +0000 UTC m=+1.352907421" watchObservedRunningTime="2025-01-17 12:07:05.736661844 +0000 UTC m=+1.352953991" Jan 17 12:07:05.811941 kubelet[2497]: I0117 12:07:05.811705 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.8116873140000003 podStartE2EDuration="3.811687314s" podCreationTimestamp="2025-01-17 12:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:05.811425615 +0000 UTC m=+1.427717752" watchObservedRunningTime="2025-01-17 12:07:05.811687314 +0000 UTC m=+1.427979451" Jan 17 12:07:06.496041 kubelet[2497]: E0117 12:07:06.495637 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:06.898500 kubelet[2497]: E0117 12:07:06.898439 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:08.526652 kubelet[2497]: E0117 12:07:08.526554 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:09.403030 kubelet[2497]: I0117 12:07:09.402997 2497 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:07:09.405583 containerd[1461]: time="2025-01-17T12:07:09.404824911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:07:09.405959 kubelet[2497]: I0117 12:07:09.405001 2497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:07:09.938413 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 17 12:07:09.953446 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:09.957754 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:33006.service: Deactivated successfully. Jan 17 12:07:09.959926 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:07:09.960140 systemd[1]: session-7.scope: Consumed 5.617s CPU time, 158.6M memory peak, 0B memory swap peak. Jan 17 12:07:09.960722 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:07:09.961601 systemd-logind[1446]: Removed session 7. Jan 17 12:07:10.024613 update_engine[1448]: I20250117 12:07:10.024508 1448 update_attempter.cc:509] Updating boot flags... Jan 17 12:07:10.068576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2593) Jan 17 12:07:10.111616 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2595) Jan 17 12:07:10.154553 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2595) Jan 17 12:07:10.372976 systemd[1]: Created slice kubepods-besteffort-pod295d2b65_4263_489b_9e2b_a6eb3c322bee.slice - libcontainer container kubepods-besteffort-pod295d2b65_4263_489b_9e2b_a6eb3c322bee.slice. Jan 17 12:07:10.411200 systemd[1]: Created slice kubepods-besteffort-pod6be166c3_e77f_4ad5_8231_dffec2999d04.slice - libcontainer container kubepods-besteffort-pod6be166c3_e77f_4ad5_8231_dffec2999d04.slice. 
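The kubelet entry above shows the node's pod CIDR (192.168.0.0/24) being handed down to containerd over CRI, with containerd replying that it has no CNI config template to apply it to yet. Below is a minimal stand-alone sketch of that UpdateRuntimeConfig call, assuming a CRI v1 endpoint on containerd's default socket (/run/containerd/containerd.sock); it illustrates the RPC shape only and is not the kubelet's own code.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; containerd's CRI plugin listens here by default.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Push the node's pod CIDR to the runtime, as kuberuntime_manager does above.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod CIDR pushed to the runtime")
}

With no CNI config template on disk, containerd only records the CIDR and waits, which is what the "wait for other system components to drop the config" message above says; pod networking becomes usable once a CNI configuration is installed later in this log.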
Jan 17 12:07:10.535851 kubelet[2497]: I0117 12:07:10.535776 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/295d2b65-4263-489b-9e2b-a6eb3c322bee-kube-proxy\") pod \"kube-proxy-6c9nx\" (UID: \"295d2b65-4263-489b-9e2b-a6eb3c322bee\") " pod="kube-system/kube-proxy-6c9nx" Jan 17 12:07:10.535851 kubelet[2497]: I0117 12:07:10.535843 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/295d2b65-4263-489b-9e2b-a6eb3c322bee-lib-modules\") pod \"kube-proxy-6c9nx\" (UID: \"295d2b65-4263-489b-9e2b-a6eb3c322bee\") " pod="kube-system/kube-proxy-6c9nx" Jan 17 12:07:10.536491 kubelet[2497]: I0117 12:07:10.535868 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jlwv\" (UniqueName: \"kubernetes.io/projected/6be166c3-e77f-4ad5-8231-dffec2999d04-kube-api-access-6jlwv\") pod \"tigera-operator-76c4976dd7-xsfsx\" (UID: \"6be166c3-e77f-4ad5-8231-dffec2999d04\") " pod="tigera-operator/tigera-operator-76c4976dd7-xsfsx" Jan 17 12:07:10.536491 kubelet[2497]: I0117 12:07:10.535892 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/295d2b65-4263-489b-9e2b-a6eb3c322bee-xtables-lock\") pod \"kube-proxy-6c9nx\" (UID: \"295d2b65-4263-489b-9e2b-a6eb3c322bee\") " pod="kube-system/kube-proxy-6c9nx" Jan 17 12:07:10.536491 kubelet[2497]: I0117 12:07:10.535970 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbb6k\" (UniqueName: \"kubernetes.io/projected/295d2b65-4263-489b-9e2b-a6eb3c322bee-kube-api-access-kbb6k\") pod \"kube-proxy-6c9nx\" (UID: \"295d2b65-4263-489b-9e2b-a6eb3c322bee\") " pod="kube-system/kube-proxy-6c9nx" Jan 17 12:07:10.536491 kubelet[2497]: I0117 12:07:10.536009 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6be166c3-e77f-4ad5-8231-dffec2999d04-var-lib-calico\") pod \"tigera-operator-76c4976dd7-xsfsx\" (UID: \"6be166c3-e77f-4ad5-8231-dffec2999d04\") " pod="tigera-operator/tigera-operator-76c4976dd7-xsfsx" Jan 17 12:07:10.685296 kubelet[2497]: E0117 12:07:10.685132 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:10.686186 containerd[1461]: time="2025-01-17T12:07:10.685980159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6c9nx,Uid:295d2b65-4263-489b-9e2b-a6eb3c322bee,Namespace:kube-system,Attempt:0,}" Jan 17 12:07:10.714978 containerd[1461]: time="2025-01-17T12:07:10.714923365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xsfsx,Uid:6be166c3-e77f-4ad5-8231-dffec2999d04,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:07:10.716884 containerd[1461]: time="2025-01-17T12:07:10.716787910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:10.716884 containerd[1461]: time="2025-01-17T12:07:10.716838265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:10.716884 containerd[1461]: time="2025-01-17T12:07:10.716851340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:10.717117 containerd[1461]: time="2025-01-17T12:07:10.716941521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:10.741684 systemd[1]: Started cri-containerd-ab5c276c62979b61534793633b862c9cf97087eeeab99ffa3a9d9edf0ff67b91.scope - libcontainer container ab5c276c62979b61534793633b862c9cf97087eeeab99ffa3a9d9edf0ff67b91. Jan 17 12:07:10.748320 containerd[1461]: time="2025-01-17T12:07:10.748099395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:10.748320 containerd[1461]: time="2025-01-17T12:07:10.748225414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:10.748320 containerd[1461]: time="2025-01-17T12:07:10.748264277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:10.748591 containerd[1461]: time="2025-01-17T12:07:10.748454608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:10.769718 systemd[1]: Started cri-containerd-70bb101a924ecbc18022d3791fd1e5e10c5b9769ea4d4f15197ebb8aef151f25.scope - libcontainer container 70bb101a924ecbc18022d3791fd1e5e10c5b9769ea4d4f15197ebb8aef151f25. Jan 17 12:07:10.772610 containerd[1461]: time="2025-01-17T12:07:10.772506532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6c9nx,Uid:295d2b65-4263-489b-9e2b-a6eb3c322bee,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab5c276c62979b61534793633b862c9cf97087eeeab99ffa3a9d9edf0ff67b91\"" Jan 17 12:07:10.774037 kubelet[2497]: E0117 12:07:10.774009 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:10.779295 containerd[1461]: time="2025-01-17T12:07:10.778264958Z" level=info msg="CreateContainer within sandbox \"ab5c276c62979b61534793633b862c9cf97087eeeab99ffa3a9d9edf0ff67b91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:07:10.809809 containerd[1461]: time="2025-01-17T12:07:10.809663608Z" level=info msg="CreateContainer within sandbox \"ab5c276c62979b61534793633b862c9cf97087eeeab99ffa3a9d9edf0ff67b91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da6b3705f7de550a3c94b9533e1aba911dd2dd66941dc12bbe558070ac85e75d\"" Jan 17 12:07:10.811062 containerd[1461]: time="2025-01-17T12:07:10.811041340Z" level=info msg="StartContainer for \"da6b3705f7de550a3c94b9533e1aba911dd2dd66941dc12bbe558070ac85e75d\"" Jan 17 12:07:10.815297 containerd[1461]: time="2025-01-17T12:07:10.815274174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xsfsx,Uid:6be166c3-e77f-4ad5-8231-dffec2999d04,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"70bb101a924ecbc18022d3791fd1e5e10c5b9769ea4d4f15197ebb8aef151f25\"" Jan 17 12:07:10.817188 containerd[1461]: time="2025-01-17T12:07:10.817155791Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:07:10.843725 systemd[1]: Started cri-containerd-da6b3705f7de550a3c94b9533e1aba911dd2dd66941dc12bbe558070ac85e75d.scope - libcontainer container da6b3705f7de550a3c94b9533e1aba911dd2dd66941dc12bbe558070ac85e75d. Jan 17 12:07:10.883114 containerd[1461]: time="2025-01-17T12:07:10.883062076Z" level=info msg="StartContainer for \"da6b3705f7de550a3c94b9533e1aba911dd2dd66941dc12bbe558070ac85e75d\" returns successfully" Jan 17 12:07:11.504403 kubelet[2497]: E0117 12:07:11.504370 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:11.525827 kubelet[2497]: I0117 12:07:11.525723 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6c9nx" podStartSLOduration=1.5256999599999999 podStartE2EDuration="1.52569996s" podCreationTimestamp="2025-01-17 12:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:11.52516675 +0000 UTC m=+7.141458877" watchObservedRunningTime="2025-01-17 12:07:11.52569996 +0000 UTC m=+7.141992097" Jan 17 12:07:13.231065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575423871.mount: Deactivated successfully. Jan 17 12:07:13.555609 containerd[1461]: time="2025-01-17T12:07:13.555420043Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:13.556254 containerd[1461]: time="2025-01-17T12:07:13.556183257Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764305" Jan 17 12:07:13.557539 containerd[1461]: time="2025-01-17T12:07:13.557472466Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:13.562806 containerd[1461]: time="2025-01-17T12:07:13.560659426Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:13.562806 containerd[1461]: time="2025-01-17T12:07:13.562243473Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.744797482s" Jan 17 12:07:13.562806 containerd[1461]: time="2025-01-17T12:07:13.562299549Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:07:13.566019 containerd[1461]: time="2025-01-17T12:07:13.565982298Z" level=info msg="CreateContainer within sandbox \"70bb101a924ecbc18022d3791fd1e5e10c5b9769ea4d4f15197ebb8aef151f25\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:07:13.578812 containerd[1461]: time="2025-01-17T12:07:13.578750686Z" level=info msg="CreateContainer within sandbox \"70bb101a924ecbc18022d3791fd1e5e10c5b9769ea4d4f15197ebb8aef151f25\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"bc201965336656b7ca45e6002cce9593468a3bcbf9fb1ca255ac7f3069269219\"" Jan 17 12:07:13.579381 containerd[1461]: time="2025-01-17T12:07:13.579317338Z" level=info msg="StartContainer for \"bc201965336656b7ca45e6002cce9593468a3bcbf9fb1ca255ac7f3069269219\"" Jan 17 12:07:13.615706 systemd[1]: Started cri-containerd-bc201965336656b7ca45e6002cce9593468a3bcbf9fb1ca255ac7f3069269219.scope - libcontainer container bc201965336656b7ca45e6002cce9593468a3bcbf9fb1ca255ac7f3069269219. Jan 17 12:07:13.809656 kubelet[2497]: E0117 12:07:13.809592 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:13.816375 containerd[1461]: time="2025-01-17T12:07:13.816307300Z" level=info msg="StartContainer for \"bc201965336656b7ca45e6002cce9593468a3bcbf9fb1ca255ac7f3069269219\" returns successfully" Jan 17 12:07:14.511397 kubelet[2497]: E0117 12:07:14.511271 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:14.711773 kubelet[2497]: I0117 12:07:14.711684 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-xsfsx" podStartSLOduration=1.964028691 podStartE2EDuration="4.711662353s" podCreationTimestamp="2025-01-17 12:07:10 +0000 UTC" firstStartedPulling="2025-01-17 12:07:10.816609004 +0000 UTC m=+6.432901141" lastFinishedPulling="2025-01-17 12:07:13.564242666 +0000 UTC m=+9.180534803" observedRunningTime="2025-01-17 12:07:14.711488364 +0000 UTC m=+10.327780521" watchObservedRunningTime="2025-01-17 12:07:14.711662353 +0000 UTC m=+10.327954490" Jan 17 12:07:16.907673 kubelet[2497]: E0117 12:07:16.907624 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:17.589582 systemd[1]: Created slice kubepods-besteffort-pod8e520d75_3709_48f9_8832_e8bc2fa02696.slice - libcontainer container kubepods-besteffort-pod8e520d75_3709_48f9_8832_e8bc2fa02696.slice. Jan 17 12:07:17.665936 systemd[1]: Created slice kubepods-besteffort-pod83bc44af_a064_4533_826b_4405f7d08fe3.slice - libcontainer container kubepods-besteffort-pod83bc44af_a064_4533_826b_4405f7d08fe3.slice. 
Jan 17 12:07:17.676751 kubelet[2497]: I0117 12:07:17.676614 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e520d75-3709-48f9-8832-e8bc2fa02696-tigera-ca-bundle\") pod \"calico-typha-7b77c8df8-sj69k\" (UID: \"8e520d75-3709-48f9-8832-e8bc2fa02696\") " pod="calico-system/calico-typha-7b77c8df8-sj69k" Jan 17 12:07:17.676751 kubelet[2497]: I0117 12:07:17.676686 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxn6g\" (UniqueName: \"kubernetes.io/projected/8e520d75-3709-48f9-8832-e8bc2fa02696-kube-api-access-wxn6g\") pod \"calico-typha-7b77c8df8-sj69k\" (UID: \"8e520d75-3709-48f9-8832-e8bc2fa02696\") " pod="calico-system/calico-typha-7b77c8df8-sj69k" Jan 17 12:07:17.676751 kubelet[2497]: I0117 12:07:17.676708 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e520d75-3709-48f9-8832-e8bc2fa02696-typha-certs\") pod \"calico-typha-7b77c8df8-sj69k\" (UID: \"8e520d75-3709-48f9-8832-e8bc2fa02696\") " pod="calico-system/calico-typha-7b77c8df8-sj69k" Jan 17 12:07:17.778432 kubelet[2497]: I0117 12:07:17.777451 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-policysync\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778432 kubelet[2497]: I0117 12:07:17.777545 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-lib-modules\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778432 kubelet[2497]: I0117 12:07:17.777570 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83bc44af-a064-4533-826b-4405f7d08fe3-tigera-ca-bundle\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778432 kubelet[2497]: I0117 12:07:17.777592 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-cni-log-dir\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778432 kubelet[2497]: I0117 12:07:17.777612 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-flexvol-driver-host\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778733 kubelet[2497]: I0117 12:07:17.777653 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83bc44af-a064-4533-826b-4405f7d08fe3-node-certs\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 
12:07:17.778733 kubelet[2497]: I0117 12:07:17.777673 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-var-run-calico\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778733 kubelet[2497]: I0117 12:07:17.777695 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-cni-net-dir\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778733 kubelet[2497]: I0117 12:07:17.777729 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-xtables-lock\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778733 kubelet[2497]: I0117 12:07:17.777749 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-var-lib-calico\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778913 kubelet[2497]: I0117 12:07:17.777767 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83bc44af-a064-4533-826b-4405f7d08fe3-cni-bin-dir\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.778913 kubelet[2497]: I0117 12:07:17.777788 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fscc2\" (UniqueName: \"kubernetes.io/projected/83bc44af-a064-4533-826b-4405f7d08fe3-kube-api-access-fscc2\") pod \"calico-node-ctg9m\" (UID: \"83bc44af-a064-4533-826b-4405f7d08fe3\") " pod="calico-system/calico-node-ctg9m" Jan 17 12:07:17.782824 kubelet[2497]: E0117 12:07:17.782772 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:17.883634 kubelet[2497]: E0117 12:07:17.882064 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.883634 kubelet[2497]: W0117 12:07:17.882094 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.883634 kubelet[2497]: E0117 12:07:17.882115 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:17.885778 kubelet[2497]: E0117 12:07:17.885739 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.885778 kubelet[2497]: W0117 12:07:17.885768 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.885986 kubelet[2497]: E0117 12:07:17.885802 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.890638 kubelet[2497]: E0117 12:07:17.890594 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.890638 kubelet[2497]: W0117 12:07:17.890628 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.890726 kubelet[2497]: E0117 12:07:17.890658 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.894755 kubelet[2497]: E0117 12:07:17.894665 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:17.895326 containerd[1461]: time="2025-01-17T12:07:17.895285874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b77c8df8-sj69k,Uid:8e520d75-3709-48f9-8832-e8bc2fa02696,Namespace:calico-system,Attempt:0,}" Jan 17 12:07:17.931238 containerd[1461]: time="2025-01-17T12:07:17.931078620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:17.931238 containerd[1461]: time="2025-01-17T12:07:17.931162409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:17.931238 containerd[1461]: time="2025-01-17T12:07:17.931175985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:17.931559 containerd[1461]: time="2025-01-17T12:07:17.931285862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:17.956700 systemd[1]: Started cri-containerd-0ac3338c2131e9140df4e2620d2d8a2fc0a9c628829416eaf97a56ec2a587d57.scope - libcontainer container 0ac3338c2131e9140df4e2620d2d8a2fc0a9c628829416eaf97a56ec2a587d57. 
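The repeated driver-call.go and plugins.go errors around this point all come from the kubelet probing the FlexVolume plugin directory nodeagent~uds: it execs the driver binary with an init argument and expects a JSON status object on stdout, but the uds binary that Calico's node agent is expected to install is not present yet, so the exec fails, the output is empty, and unmarshalling stops with "unexpected end of JSON input". Below is a minimal stand-alone sketch of that probe shape, not the kubelet's actual flexvolume package; the driver path is copied from the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probe runs "<driver> init" and decodes its stdout, roughly the check the
// kubelet performs for every directory under the flexvolume plugin dir.
func probe(driver string) (*DriverStatus, error) {
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; on this node the binary does not exist yet.
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // ... unexpected end of JSON input ...
}

These warnings typically stop once calico-node's init container copies the uds driver into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ and the probe starts returning a valid status object.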
Jan 17 12:07:17.971836 kubelet[2497]: E0117 12:07:17.970900 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:17.973245 containerd[1461]: time="2025-01-17T12:07:17.972663644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ctg9m,Uid:83bc44af-a064-4533-826b-4405f7d08fe3,Namespace:calico-system,Attempt:0,}" Jan 17 12:07:17.979476 kubelet[2497]: E0117 12:07:17.979426 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.979476 kubelet[2497]: W0117 12:07:17.979459 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.979476 kubelet[2497]: E0117 12:07:17.979481 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.979736 kubelet[2497]: I0117 12:07:17.979572 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/72572da8-b025-4046-8056-05fcf0914c02-registration-dir\") pod \"csi-node-driver-pjd54\" (UID: \"72572da8-b025-4046-8056-05fcf0914c02\") " pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:17.979941 kubelet[2497]: E0117 12:07:17.979911 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.979941 kubelet[2497]: W0117 12:07:17.979931 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.980112 kubelet[2497]: E0117 12:07:17.979950 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.980162 kubelet[2497]: I0117 12:07:17.980122 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqjm\" (UniqueName: \"kubernetes.io/projected/72572da8-b025-4046-8056-05fcf0914c02-kube-api-access-2xqjm\") pod \"csi-node-driver-pjd54\" (UID: \"72572da8-b025-4046-8056-05fcf0914c02\") " pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:17.980255 kubelet[2497]: E0117 12:07:17.980230 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.980255 kubelet[2497]: W0117 12:07:17.980250 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.980319 kubelet[2497]: E0117 12:07:17.980281 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:17.980575 kubelet[2497]: E0117 12:07:17.980536 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.980575 kubelet[2497]: W0117 12:07:17.980551 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.980672 kubelet[2497]: E0117 12:07:17.980601 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.981030 kubelet[2497]: E0117 12:07:17.981002 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.981030 kubelet[2497]: W0117 12:07:17.981019 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.981228 kubelet[2497]: E0117 12:07:17.981038 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.981338 kubelet[2497]: I0117 12:07:17.981310 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/72572da8-b025-4046-8056-05fcf0914c02-varrun\") pod \"csi-node-driver-pjd54\" (UID: \"72572da8-b025-4046-8056-05fcf0914c02\") " pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:17.981617 kubelet[2497]: E0117 12:07:17.981592 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.981617 kubelet[2497]: W0117 12:07:17.981608 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.981716 kubelet[2497]: E0117 12:07:17.981645 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.982080 kubelet[2497]: E0117 12:07:17.982056 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.982080 kubelet[2497]: W0117 12:07:17.982071 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.982173 kubelet[2497]: E0117 12:07:17.982095 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:17.982636 kubelet[2497]: E0117 12:07:17.982580 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.982636 kubelet[2497]: W0117 12:07:17.982595 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.982636 kubelet[2497]: E0117 12:07:17.982627 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.982965 kubelet[2497]: E0117 12:07:17.982942 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.982965 kubelet[2497]: W0117 12:07:17.982955 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.983561 kubelet[2497]: E0117 12:07:17.983316 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.983689 kubelet[2497]: E0117 12:07:17.983662 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.983689 kubelet[2497]: W0117 12:07:17.983680 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.983689 kubelet[2497]: E0117 12:07:17.983690 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.983813 kubelet[2497]: I0117 12:07:17.983790 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72572da8-b025-4046-8056-05fcf0914c02-kubelet-dir\") pod \"csi-node-driver-pjd54\" (UID: \"72572da8-b025-4046-8056-05fcf0914c02\") " pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:17.984544 kubelet[2497]: E0117 12:07:17.984499 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.984640 kubelet[2497]: W0117 12:07:17.984565 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.984680 kubelet[2497]: E0117 12:07:17.984642 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:17.984680 kubelet[2497]: I0117 12:07:17.984658 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/72572da8-b025-4046-8056-05fcf0914c02-socket-dir\") pod \"csi-node-driver-pjd54\" (UID: \"72572da8-b025-4046-8056-05fcf0914c02\") " pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:17.985269 kubelet[2497]: E0117 12:07:17.985211 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.985478 kubelet[2497]: W0117 12:07:17.985360 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.985478 kubelet[2497]: E0117 12:07:17.985430 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.987547 kubelet[2497]: E0117 12:07:17.985936 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.987547 kubelet[2497]: W0117 12:07:17.985951 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.987861 kubelet[2497]: E0117 12:07:17.987744 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.988061 kubelet[2497]: E0117 12:07:17.988046 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.988206 kubelet[2497]: W0117 12:07:17.988132 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.988206 kubelet[2497]: E0117 12:07:17.988151 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:17.988635 kubelet[2497]: E0117 12:07:17.988582 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:17.988635 kubelet[2497]: W0117 12:07:17.988599 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:17.988635 kubelet[2497]: E0117 12:07:17.988611 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.008286 containerd[1461]: time="2025-01-17T12:07:18.008121512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:18.008596 containerd[1461]: time="2025-01-17T12:07:18.008190743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:18.008596 containerd[1461]: time="2025-01-17T12:07:18.008325026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:18.010355 containerd[1461]: time="2025-01-17T12:07:18.008595186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:18.020636 containerd[1461]: time="2025-01-17T12:07:18.020570509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b77c8df8-sj69k,Uid:8e520d75-3709-48f9-8832-e8bc2fa02696,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ac3338c2131e9140df4e2620d2d8a2fc0a9c628829416eaf97a56ec2a587d57\"" Jan 17 12:07:18.021869 kubelet[2497]: E0117 12:07:18.021837 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:18.024005 containerd[1461]: time="2025-01-17T12:07:18.023970015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:07:18.033785 systemd[1]: Started cri-containerd-ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238.scope - libcontainer container ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238. Jan 17 12:07:18.067617 containerd[1461]: time="2025-01-17T12:07:18.067554690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ctg9m,Uid:83bc44af-a064-4533-826b-4405f7d08fe3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\"" Jan 17 12:07:18.068844 kubelet[2497]: E0117 12:07:18.068732 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:18.085497 kubelet[2497]: E0117 12:07:18.085458 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.085497 kubelet[2497]: W0117 12:07:18.085482 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.085497 kubelet[2497]: E0117 12:07:18.085503 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.085805 kubelet[2497]: E0117 12:07:18.085787 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.085805 kubelet[2497]: W0117 12:07:18.085800 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.085876 kubelet[2497]: E0117 12:07:18.085811 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.086084 kubelet[2497]: E0117 12:07:18.086046 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.086084 kubelet[2497]: W0117 12:07:18.086062 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.086084 kubelet[2497]: E0117 12:07:18.086076 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.086443 kubelet[2497]: E0117 12:07:18.086424 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.086443 kubelet[2497]: W0117 12:07:18.086436 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.086509 kubelet[2497]: E0117 12:07:18.086452 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.086842 kubelet[2497]: E0117 12:07:18.086814 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.086882 kubelet[2497]: W0117 12:07:18.086842 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.086909 kubelet[2497]: E0117 12:07:18.086881 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.087124 kubelet[2497]: E0117 12:07:18.087108 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.087124 kubelet[2497]: W0117 12:07:18.087120 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.087188 kubelet[2497]: E0117 12:07:18.087134 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.087356 kubelet[2497]: E0117 12:07:18.087340 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.087390 kubelet[2497]: W0117 12:07:18.087367 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.087420 kubelet[2497]: E0117 12:07:18.087397 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.087610 kubelet[2497]: E0117 12:07:18.087589 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.087610 kubelet[2497]: W0117 12:07:18.087601 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.087678 kubelet[2497]: E0117 12:07:18.087628 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.087814 kubelet[2497]: E0117 12:07:18.087794 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.087814 kubelet[2497]: W0117 12:07:18.087805 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.087866 kubelet[2497]: E0117 12:07:18.087836 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.088016 kubelet[2497]: E0117 12:07:18.087997 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.088016 kubelet[2497]: W0117 12:07:18.088009 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.088069 kubelet[2497]: E0117 12:07:18.088040 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.088537 kubelet[2497]: E0117 12:07:18.088475 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.088537 kubelet[2497]: W0117 12:07:18.088517 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.088904 kubelet[2497]: E0117 12:07:18.088867 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.088904 kubelet[2497]: W0117 12:07:18.088895 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.089116 kubelet[2497]: E0117 12:07:18.089093 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.089116 kubelet[2497]: W0117 12:07:18.089107 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.089390 kubelet[2497]: E0117 12:07:18.089365 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.089390 kubelet[2497]: W0117 12:07:18.089381 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.089622 kubelet[2497]: E0117 12:07:18.089604 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.089622 kubelet[2497]: W0117 12:07:18.089616 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.089686 kubelet[2497]: E0117 12:07:18.089634 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.089863 kubelet[2497]: E0117 12:07:18.089847 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.089863 kubelet[2497]: W0117 12:07:18.089859 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.089924 kubelet[2497]: E0117 12:07:18.089894 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.090107 kubelet[2497]: E0117 12:07:18.090091 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.090107 kubelet[2497]: W0117 12:07:18.090104 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.090167 kubelet[2497]: E0117 12:07:18.090119 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.090362 kubelet[2497]: E0117 12:07:18.090345 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.090401 kubelet[2497]: W0117 12:07:18.090358 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.090401 kubelet[2497]: E0117 12:07:18.090375 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.090683 kubelet[2497]: E0117 12:07:18.090559 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.090683 kubelet[2497]: E0117 12:07:18.090594 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.090763 kubelet[2497]: E0117 12:07:18.090739 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.090798 kubelet[2497]: E0117 12:07:18.090764 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.090980 kubelet[2497]: E0117 12:07:18.090956 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.091134 kubelet[2497]: W0117 12:07:18.091045 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.091134 kubelet[2497]: E0117 12:07:18.091064 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.091335 kubelet[2497]: E0117 12:07:18.091321 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.091414 kubelet[2497]: W0117 12:07:18.091385 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.091414 kubelet[2497]: E0117 12:07:18.091401 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.091663 kubelet[2497]: E0117 12:07:18.091646 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.091663 kubelet[2497]: W0117 12:07:18.091661 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.091723 kubelet[2497]: E0117 12:07:18.091671 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.091913 kubelet[2497]: E0117 12:07:18.091885 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.091913 kubelet[2497]: W0117 12:07:18.091906 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.091998 kubelet[2497]: E0117 12:07:18.091921 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.092208 kubelet[2497]: E0117 12:07:18.092191 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.092208 kubelet[2497]: W0117 12:07:18.092205 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.092276 kubelet[2497]: E0117 12:07:18.092230 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.092457 kubelet[2497]: E0117 12:07:18.092439 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.092490 kubelet[2497]: W0117 12:07:18.092453 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.092490 kubelet[2497]: E0117 12:07:18.092469 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.092918 kubelet[2497]: E0117 12:07:18.092902 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.093732 kubelet[2497]: W0117 12:07:18.092969 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.093732 kubelet[2497]: E0117 12:07:18.092983 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.100605 kubelet[2497]: E0117 12:07:18.100588 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.100605 kubelet[2497]: W0117 12:07:18.100601 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.100706 kubelet[2497]: E0117 12:07:18.100612 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.532150 kubelet[2497]: E0117 12:07:18.532096 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:18.583002 kubelet[2497]: E0117 12:07:18.582963 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.583002 kubelet[2497]: W0117 12:07:18.582988 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.583002 kubelet[2497]: E0117 12:07:18.583009 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.583346 kubelet[2497]: E0117 12:07:18.583328 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.583346 kubelet[2497]: W0117 12:07:18.583338 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.583346 kubelet[2497]: E0117 12:07:18.583347 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.583649 kubelet[2497]: E0117 12:07:18.583608 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.583649 kubelet[2497]: W0117 12:07:18.583631 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.583649 kubelet[2497]: E0117 12:07:18.583643 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.583874 kubelet[2497]: E0117 12:07:18.583857 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.583874 kubelet[2497]: W0117 12:07:18.583869 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.583961 kubelet[2497]: E0117 12:07:18.583878 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.584143 kubelet[2497]: E0117 12:07:18.584126 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.584143 kubelet[2497]: W0117 12:07:18.584137 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.584143 kubelet[2497]: E0117 12:07:18.584146 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.584349 kubelet[2497]: E0117 12:07:18.584334 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.584349 kubelet[2497]: W0117 12:07:18.584345 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.584421 kubelet[2497]: E0117 12:07:18.584354 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.584630 kubelet[2497]: E0117 12:07:18.584598 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.584630 kubelet[2497]: W0117 12:07:18.584626 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.584721 kubelet[2497]: E0117 12:07:18.584655 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.585035 kubelet[2497]: E0117 12:07:18.584939 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.585035 kubelet[2497]: W0117 12:07:18.584953 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.585035 kubelet[2497]: E0117 12:07:18.584961 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.585276 kubelet[2497]: E0117 12:07:18.585259 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.585276 kubelet[2497]: W0117 12:07:18.585272 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.585339 kubelet[2497]: E0117 12:07:18.585284 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.585517 kubelet[2497]: E0117 12:07:18.585502 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.585517 kubelet[2497]: W0117 12:07:18.585513 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.585598 kubelet[2497]: E0117 12:07:18.585536 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.585838 kubelet[2497]: E0117 12:07:18.585804 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.585880 kubelet[2497]: W0117 12:07:18.585838 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.585880 kubelet[2497]: E0117 12:07:18.585868 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.586164 kubelet[2497]: E0117 12:07:18.586148 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.586164 kubelet[2497]: W0117 12:07:18.586160 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.586243 kubelet[2497]: E0117 12:07:18.586170 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:18.586422 kubelet[2497]: E0117 12:07:18.586405 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.586422 kubelet[2497]: W0117 12:07:18.586417 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.586473 kubelet[2497]: E0117 12:07:18.586427 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.586843 kubelet[2497]: E0117 12:07:18.586713 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.586843 kubelet[2497]: W0117 12:07:18.586738 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.586843 kubelet[2497]: E0117 12:07:18.586749 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:18.587136 kubelet[2497]: E0117 12:07:18.587117 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:18.587136 kubelet[2497]: W0117 12:07:18.587129 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:18.587238 kubelet[2497]: E0117 12:07:18.587139 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.478938 kubelet[2497]: E0117 12:07:19.478874 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:19.524174 kubelet[2497]: E0117 12:07:19.523679 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:19.593711 kubelet[2497]: E0117 12:07:19.593659 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.593711 kubelet[2497]: W0117 12:07:19.593686 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.593711 kubelet[2497]: E0117 12:07:19.593712 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:19.593953 kubelet[2497]: E0117 12:07:19.593930 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.593953 kubelet[2497]: W0117 12:07:19.593946 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.594019 kubelet[2497]: E0117 12:07:19.593960 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.594212 kubelet[2497]: E0117 12:07:19.594181 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.594212 kubelet[2497]: W0117 12:07:19.594205 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.594267 kubelet[2497]: E0117 12:07:19.594216 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.594450 kubelet[2497]: E0117 12:07:19.594427 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.594450 kubelet[2497]: W0117 12:07:19.594443 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.594503 kubelet[2497]: E0117 12:07:19.594453 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.594710 kubelet[2497]: E0117 12:07:19.594688 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.594710 kubelet[2497]: W0117 12:07:19.594705 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.594768 kubelet[2497]: E0117 12:07:19.594716 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.594958 kubelet[2497]: E0117 12:07:19.594935 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.594958 kubelet[2497]: W0117 12:07:19.594951 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.595008 kubelet[2497]: E0117 12:07:19.594961 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:19.595188 kubelet[2497]: E0117 12:07:19.595167 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.595188 kubelet[2497]: W0117 12:07:19.595181 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.595188 kubelet[2497]: E0117 12:07:19.595202 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.595447 kubelet[2497]: E0117 12:07:19.595429 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.595447 kubelet[2497]: W0117 12:07:19.595443 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.595515 kubelet[2497]: E0117 12:07:19.595454 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.595705 kubelet[2497]: E0117 12:07:19.595688 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.595705 kubelet[2497]: W0117 12:07:19.595699 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.595756 kubelet[2497]: E0117 12:07:19.595708 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.595902 kubelet[2497]: E0117 12:07:19.595887 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.595902 kubelet[2497]: W0117 12:07:19.595897 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.595953 kubelet[2497]: E0117 12:07:19.595908 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.596112 kubelet[2497]: E0117 12:07:19.596094 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.596112 kubelet[2497]: W0117 12:07:19.596108 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.596152 kubelet[2497]: E0117 12:07:19.596118 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:19.596364 kubelet[2497]: E0117 12:07:19.596336 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.596364 kubelet[2497]: W0117 12:07:19.596352 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.596420 kubelet[2497]: E0117 12:07:19.596364 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.596613 kubelet[2497]: E0117 12:07:19.596594 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.596613 kubelet[2497]: W0117 12:07:19.596609 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.596661 kubelet[2497]: E0117 12:07:19.596620 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.596853 kubelet[2497]: E0117 12:07:19.596835 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.596853 kubelet[2497]: W0117 12:07:19.596848 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.596893 kubelet[2497]: E0117 12:07:19.596858 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:19.597064 kubelet[2497]: E0117 12:07:19.597048 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:19.597064 kubelet[2497]: W0117 12:07:19.597059 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:19.597112 kubelet[2497]: E0117 12:07:19.597067 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:20.152887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205998245.mount: Deactivated successfully. 
Jan 17 12:07:21.478720 kubelet[2497]: E0117 12:07:21.478641 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:21.958402 containerd[1461]: time="2025-01-17T12:07:21.958318573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:21.959162 containerd[1461]: time="2025-01-17T12:07:21.959105396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:07:21.960623 containerd[1461]: time="2025-01-17T12:07:21.960586058Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:21.963291 containerd[1461]: time="2025-01-17T12:07:21.963219453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:21.964079 containerd[1461]: time="2025-01-17T12:07:21.964019812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.940012507s" Jan 17 12:07:21.964183 containerd[1461]: time="2025-01-17T12:07:21.964079294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:07:21.965376 containerd[1461]: time="2025-01-17T12:07:21.965323249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:07:21.975252 containerd[1461]: time="2025-01-17T12:07:21.975015993Z" level=info msg="CreateContainer within sandbox \"0ac3338c2131e9140df4e2620d2d8a2fc0a9c628829416eaf97a56ec2a587d57\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:07:21.995957 containerd[1461]: time="2025-01-17T12:07:21.995884777Z" level=info msg="CreateContainer within sandbox \"0ac3338c2131e9140df4e2620d2d8a2fc0a9c628829416eaf97a56ec2a587d57\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"844252439e79bd87539271959b2d783b8140f93465bac01354e681f265363aa2\"" Jan 17 12:07:21.996513 containerd[1461]: time="2025-01-17T12:07:21.996492885Z" level=info msg="StartContainer for \"844252439e79bd87539271959b2d783b8140f93465bac01354e681f265363aa2\"" Jan 17 12:07:22.032682 systemd[1]: Started cri-containerd-844252439e79bd87539271959b2d783b8140f93465bac01354e681f265363aa2.scope - libcontainer container 844252439e79bd87539271959b2d783b8140f93465bac01354e681f265363aa2. 
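A hedged worked example of the arithmetic in the pod_startup_latency_tracker entry that follows, assuming the reported SLO duration is the end-to-end startup time with the image-pull window excluded: podStartE2EDuration=5.539282616s minus the pull window (lastFinishedPulling 12:07:21.965051677 minus firstStartedPulling 12:07:18.023428885, i.e. 3.941622792s) gives exactly the reported podStartSLOduration=1.597659824s. The snippet below only redoes that subtraction with the timestamps copied from the log entry.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Timestamps copied from the pod_startup_latency_tracker entry for calico-typha-7b77c8df8-sj69k.
	first, _ := time.Parse(layout, "2025-01-17 12:07:18.023428885 +0000 UTC")
	last, _ := time.Parse(layout, "2025-01-17 12:07:21.965051677 +0000 UTC")
	e2e := 5539282616 * time.Nanosecond // podStartE2EDuration from the log

	pulling := last.Sub(first) // 3.941622792s spent pulling the typha image
	slo := e2e - pulling       // 1.597659824s, matching podStartSLOduration
	fmt.Println(pulling, slo)
}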
Jan 17 12:07:22.075353 containerd[1461]: time="2025-01-17T12:07:22.075294344Z" level=info msg="StartContainer for \"844252439e79bd87539271959b2d783b8140f93465bac01354e681f265363aa2\" returns successfully" Jan 17 12:07:22.530367 kubelet[2497]: E0117 12:07:22.530326 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:22.539703 kubelet[2497]: I0117 12:07:22.539298 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b77c8df8-sj69k" podStartSLOduration=1.597659824 podStartE2EDuration="5.539282616s" podCreationTimestamp="2025-01-17 12:07:17 +0000 UTC" firstStartedPulling="2025-01-17 12:07:18.023428885 +0000 UTC m=+13.639721022" lastFinishedPulling="2025-01-17 12:07:21.965051677 +0000 UTC m=+17.581343814" observedRunningTime="2025-01-17 12:07:22.538728421 +0000 UTC m=+18.155020558" watchObservedRunningTime="2025-01-17 12:07:22.539282616 +0000 UTC m=+18.155574753" Jan 17 12:07:22.622311 kubelet[2497]: E0117 12:07:22.622254 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.622311 kubelet[2497]: W0117 12:07:22.622284 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.622311 kubelet[2497]: E0117 12:07:22.622307 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.622571 kubelet[2497]: E0117 12:07:22.622500 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.622571 kubelet[2497]: W0117 12:07:22.622508 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.622571 kubelet[2497]: E0117 12:07:22.622517 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.622775 kubelet[2497]: E0117 12:07:22.622750 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.622775 kubelet[2497]: W0117 12:07:22.622763 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.622775 kubelet[2497]: E0117 12:07:22.622772 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:22.622994 kubelet[2497]: E0117 12:07:22.622970 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.622994 kubelet[2497]: W0117 12:07:22.622983 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.622994 kubelet[2497]: E0117 12:07:22.622992 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.623314 kubelet[2497]: E0117 12:07:22.623289 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.623314 kubelet[2497]: W0117 12:07:22.623303 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.623501 kubelet[2497]: E0117 12:07:22.623316 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.623548 kubelet[2497]: E0117 12:07:22.623506 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.623548 kubelet[2497]: W0117 12:07:22.623514 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.623548 kubelet[2497]: E0117 12:07:22.623541 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.623754 kubelet[2497]: E0117 12:07:22.623730 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.623754 kubelet[2497]: W0117 12:07:22.623742 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.623754 kubelet[2497]: E0117 12:07:22.623751 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.623950 kubelet[2497]: E0117 12:07:22.623927 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.623950 kubelet[2497]: W0117 12:07:22.623939 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.623950 kubelet[2497]: E0117 12:07:22.623947 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:22.624187 kubelet[2497]: E0117 12:07:22.624164 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.624187 kubelet[2497]: W0117 12:07:22.624175 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.624187 kubelet[2497]: E0117 12:07:22.624183 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.624391 kubelet[2497]: E0117 12:07:22.624367 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.624391 kubelet[2497]: W0117 12:07:22.624381 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.624441 kubelet[2497]: E0117 12:07:22.624392 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.624635 kubelet[2497]: E0117 12:07:22.624610 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.624635 kubelet[2497]: W0117 12:07:22.624624 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.624635 kubelet[2497]: E0117 12:07:22.624632 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.624835 kubelet[2497]: E0117 12:07:22.624818 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.624835 kubelet[2497]: W0117 12:07:22.624828 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.624882 kubelet[2497]: E0117 12:07:22.624836 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.625073 kubelet[2497]: E0117 12:07:22.625057 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.625073 kubelet[2497]: W0117 12:07:22.625068 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.625154 kubelet[2497]: E0117 12:07:22.625077 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:22.625285 kubelet[2497]: E0117 12:07:22.625272 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.625285 kubelet[2497]: W0117 12:07:22.625282 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.625331 kubelet[2497]: E0117 12:07:22.625289 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.625481 kubelet[2497]: E0117 12:07:22.625467 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.625481 kubelet[2497]: W0117 12:07:22.625478 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.625551 kubelet[2497]: E0117 12:07:22.625486 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.625789 kubelet[2497]: E0117 12:07:22.625767 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.625789 kubelet[2497]: W0117 12:07:22.625779 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.625789 kubelet[2497]: E0117 12:07:22.625788 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.626103 kubelet[2497]: E0117 12:07:22.626061 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.626103 kubelet[2497]: W0117 12:07:22.626089 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.626303 kubelet[2497]: E0117 12:07:22.626120 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.626394 kubelet[2497]: E0117 12:07:22.626379 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.626394 kubelet[2497]: W0117 12:07:22.626391 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.626443 kubelet[2497]: E0117 12:07:22.626402 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:22.626658 kubelet[2497]: E0117 12:07:22.626613 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.626658 kubelet[2497]: W0117 12:07:22.626631 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.626658 kubelet[2497]: E0117 12:07:22.626648 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.626928 kubelet[2497]: E0117 12:07:22.626911 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.626980 kubelet[2497]: W0117 12:07:22.626927 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.626980 kubelet[2497]: E0117 12:07:22.626950 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.627217 kubelet[2497]: E0117 12:07:22.627191 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.627217 kubelet[2497]: W0117 12:07:22.627206 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.627278 kubelet[2497]: E0117 12:07:22.627222 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.627458 kubelet[2497]: E0117 12:07:22.627444 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.627486 kubelet[2497]: W0117 12:07:22.627460 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.627486 kubelet[2497]: E0117 12:07:22.627479 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.627721 kubelet[2497]: E0117 12:07:22.627708 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.627721 kubelet[2497]: W0117 12:07:22.627720 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.627805 kubelet[2497]: E0117 12:07:22.627748 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:22.627941 kubelet[2497]: E0117 12:07:22.627928 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.627941 kubelet[2497]: W0117 12:07:22.627939 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.628008 kubelet[2497]: E0117 12:07:22.627974 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.628190 kubelet[2497]: E0117 12:07:22.628171 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.628190 kubelet[2497]: W0117 12:07:22.628186 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.628363 kubelet[2497]: E0117 12:07:22.628203 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.628461 kubelet[2497]: E0117 12:07:22.628443 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.628461 kubelet[2497]: W0117 12:07:22.628456 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.628511 kubelet[2497]: E0117 12:07:22.628471 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.628717 kubelet[2497]: E0117 12:07:22.628701 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.628717 kubelet[2497]: W0117 12:07:22.628711 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.628784 kubelet[2497]: E0117 12:07:22.628727 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.629038 kubelet[2497]: E0117 12:07:22.629019 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.629038 kubelet[2497]: W0117 12:07:22.629034 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.629095 kubelet[2497]: E0117 12:07:22.629048 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:22.629279 kubelet[2497]: E0117 12:07:22.629261 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.629279 kubelet[2497]: W0117 12:07:22.629272 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.629339 kubelet[2497]: E0117 12:07:22.629286 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.629497 kubelet[2497]: E0117 12:07:22.629480 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.629497 kubelet[2497]: W0117 12:07:22.629493 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.629572 kubelet[2497]: E0117 12:07:22.629509 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.629759 kubelet[2497]: E0117 12:07:22.629742 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.629759 kubelet[2497]: W0117 12:07:22.629757 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.629833 kubelet[2497]: E0117 12:07:22.629769 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.630014 kubelet[2497]: E0117 12:07:22.629999 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.630014 kubelet[2497]: W0117 12:07:22.630009 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.630068 kubelet[2497]: E0117 12:07:22.630019 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:22.630598 kubelet[2497]: E0117 12:07:22.630576 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:22.630598 kubelet[2497]: W0117 12:07:22.630591 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:22.630701 kubelet[2497]: E0117 12:07:22.630602 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.479160 kubelet[2497]: E0117 12:07:23.479085 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:23.531767 kubelet[2497]: I0117 12:07:23.531718 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:07:23.532257 kubelet[2497]: E0117 12:07:23.532103 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:23.632660 kubelet[2497]: E0117 12:07:23.632604 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.632660 kubelet[2497]: W0117 12:07:23.632655 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.632660 kubelet[2497]: E0117 12:07:23.632677 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.632924 kubelet[2497]: E0117 12:07:23.632917 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.632953 kubelet[2497]: W0117 12:07:23.632928 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.632953 kubelet[2497]: E0117 12:07:23.632939 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.633214 kubelet[2497]: E0117 12:07:23.633184 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.633214 kubelet[2497]: W0117 12:07:23.633201 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.633270 kubelet[2497]: E0117 12:07:23.633220 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.633474 kubelet[2497]: E0117 12:07:23.633446 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.633474 kubelet[2497]: W0117 12:07:23.633462 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.633474 kubelet[2497]: E0117 12:07:23.633472 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.633734 kubelet[2497]: E0117 12:07:23.633707 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.633734 kubelet[2497]: W0117 12:07:23.633725 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.633789 kubelet[2497]: E0117 12:07:23.633735 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.634000 kubelet[2497]: E0117 12:07:23.633980 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.634000 kubelet[2497]: W0117 12:07:23.633995 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.634059 kubelet[2497]: E0117 12:07:23.634005 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.634284 kubelet[2497]: E0117 12:07:23.634264 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.634284 kubelet[2497]: W0117 12:07:23.634278 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.634343 kubelet[2497]: E0117 12:07:23.634289 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.634542 kubelet[2497]: E0117 12:07:23.634512 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.634572 kubelet[2497]: W0117 12:07:23.634554 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.634572 kubelet[2497]: E0117 12:07:23.634566 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.634814 kubelet[2497]: E0117 12:07:23.634794 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.634814 kubelet[2497]: W0117 12:07:23.634809 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.634871 kubelet[2497]: E0117 12:07:23.634819 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.635045 kubelet[2497]: E0117 12:07:23.635026 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.635045 kubelet[2497]: W0117 12:07:23.635040 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.635100 kubelet[2497]: E0117 12:07:23.635051 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.635288 kubelet[2497]: E0117 12:07:23.635269 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.635288 kubelet[2497]: W0117 12:07:23.635283 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.635347 kubelet[2497]: E0117 12:07:23.635294 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.635575 kubelet[2497]: E0117 12:07:23.635554 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.635575 kubelet[2497]: W0117 12:07:23.635570 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.635632 kubelet[2497]: E0117 12:07:23.635581 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.635831 kubelet[2497]: E0117 12:07:23.635811 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.635831 kubelet[2497]: W0117 12:07:23.635826 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.635888 kubelet[2497]: E0117 12:07:23.635836 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.636068 kubelet[2497]: E0117 12:07:23.636049 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.636068 kubelet[2497]: W0117 12:07:23.636063 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.636115 kubelet[2497]: E0117 12:07:23.636074 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.636317 kubelet[2497]: E0117 12:07:23.636298 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.636317 kubelet[2497]: W0117 12:07:23.636312 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.636377 kubelet[2497]: E0117 12:07:23.636327 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.733197 kubelet[2497]: E0117 12:07:23.733018 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.733197 kubelet[2497]: W0117 12:07:23.733052 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.733197 kubelet[2497]: E0117 12:07:23.733076 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.733412 kubelet[2497]: E0117 12:07:23.733333 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.733412 kubelet[2497]: W0117 12:07:23.733346 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.733412 kubelet[2497]: E0117 12:07:23.733365 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.733818 kubelet[2497]: E0117 12:07:23.733602 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.733818 kubelet[2497]: W0117 12:07:23.733612 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.733818 kubelet[2497]: E0117 12:07:23.733622 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.734163 kubelet[2497]: E0117 12:07:23.734112 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.734163 kubelet[2497]: W0117 12:07:23.734152 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.734345 kubelet[2497]: E0117 12:07:23.734186 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.734593 kubelet[2497]: E0117 12:07:23.734541 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.734593 kubelet[2497]: W0117 12:07:23.734556 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.734593 kubelet[2497]: E0117 12:07:23.734570 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.734888 kubelet[2497]: E0117 12:07:23.734757 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.734888 kubelet[2497]: W0117 12:07:23.734765 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.734888 kubelet[2497]: E0117 12:07:23.734789 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.734996 kubelet[2497]: E0117 12:07:23.734919 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.734996 kubelet[2497]: W0117 12:07:23.734935 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.734996 kubelet[2497]: E0117 12:07:23.734969 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.735114 kubelet[2497]: E0117 12:07:23.735097 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.735114 kubelet[2497]: W0117 12:07:23.735109 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.735194 kubelet[2497]: E0117 12:07:23.735166 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.735362 kubelet[2497]: E0117 12:07:23.735342 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.735362 kubelet[2497]: W0117 12:07:23.735357 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.735472 kubelet[2497]: E0117 12:07:23.735376 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.735664 kubelet[2497]: E0117 12:07:23.735647 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.735664 kubelet[2497]: W0117 12:07:23.735662 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.735738 kubelet[2497]: E0117 12:07:23.735687 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.735937 kubelet[2497]: E0117 12:07:23.735924 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.735937 kubelet[2497]: W0117 12:07:23.735935 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.735987 kubelet[2497]: E0117 12:07:23.735949 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.736204 kubelet[2497]: E0117 12:07:23.736185 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.736204 kubelet[2497]: W0117 12:07:23.736199 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.736286 kubelet[2497]: E0117 12:07:23.736215 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.736558 kubelet[2497]: E0117 12:07:23.736520 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.736558 kubelet[2497]: W0117 12:07:23.736554 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.736658 kubelet[2497]: E0117 12:07:23.736572 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.736825 kubelet[2497]: E0117 12:07:23.736807 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.736825 kubelet[2497]: W0117 12:07:23.736823 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.736885 kubelet[2497]: E0117 12:07:23.736838 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:23.737065 kubelet[2497]: E0117 12:07:23.737048 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.737065 kubelet[2497]: W0117 12:07:23.737062 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.737107 kubelet[2497]: E0117 12:07:23.737077 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.737344 kubelet[2497]: E0117 12:07:23.737315 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.737344 kubelet[2497]: W0117 12:07:23.737335 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.737398 kubelet[2497]: E0117 12:07:23.737351 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.737616 kubelet[2497]: E0117 12:07:23.737599 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.737640 kubelet[2497]: W0117 12:07:23.737616 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.737640 kubelet[2497]: E0117 12:07:23.737627 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:07:23.738055 kubelet[2497]: E0117 12:07:23.738031 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:07:23.738055 kubelet[2497]: W0117 12:07:23.738047 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:07:23.738108 kubelet[2497]: E0117 12:07:23.738058 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:07:24.141664 containerd[1461]: time="2025-01-17T12:07:24.141579258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:24.142273 containerd[1461]: time="2025-01-17T12:07:24.142209085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:07:24.143441 containerd[1461]: time="2025-01-17T12:07:24.143409686Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:24.145986 containerd[1461]: time="2025-01-17T12:07:24.145941224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:24.146714 containerd[1461]: time="2025-01-17T12:07:24.146663895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.181290242s" Jan 17 12:07:24.146751 containerd[1461]: time="2025-01-17T12:07:24.146720402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:07:24.149091 containerd[1461]: time="2025-01-17T12:07:24.149048708Z" level=info msg="CreateContainer within sandbox \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:07:24.166802 containerd[1461]: time="2025-01-17T12:07:24.166734951Z" level=info msg="CreateContainer within sandbox \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f\"" Jan 17 12:07:24.167499 containerd[1461]: time="2025-01-17T12:07:24.167437193Z" level=info msg="StartContainer for \"b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f\"" Jan 17 12:07:24.201723 systemd[1]: Started cri-containerd-b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f.scope - libcontainer container b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f. Jan 17 12:07:24.237967 containerd[1461]: time="2025-01-17T12:07:24.237796070Z" level=info msg="StartContainer for \"b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f\" returns successfully" Jan 17 12:07:24.255223 systemd[1]: cri-containerd-b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f.scope: Deactivated successfully. Jan 17 12:07:24.286781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f-rootfs.mount: Deactivated successfully. 
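The burst of FlexVolume errors above comes from the kubelet's plugin prober: it walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes each driver it finds with the `init` operation, and expects a JSON status object on stdout. The nodeagent~uds/uds executable is not on disk yet, so the call produces no output and decoding fails with "unexpected end of JSON input". The pod2daemon-flexvol container started just above is Calico's init container that installs that uds driver. A minimal, hypothetical driver sketch showing the contract the kubelet expects (an illustration, not the actual Calico uds binary):

    // flexvol-driver-sketch.go: hypothetical FlexVolume driver. The kubelet
    // calls the binary with the operation name as the first argument and
    // parses a JSON status object from stdout; empty output is exactly what
    // produces "unexpected end of JSON input" in the log above.
    package main

    import (
        "encoding/json"
        "os"
    )

    type result struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        op := ""
        if len(os.Args) > 1 {
            op = os.Args[1]
        }
        enc := json.NewEncoder(os.Stdout)
        switch op {
        case "init":
            // Report success and declare that attach/detach is not implemented.
            enc.Encode(result{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            enc.Encode(result{Status: "Not supported", Message: "unsupported operation: " + op})
        }
    }

Any driver that at least answers `init` with such a JSON object satisfies the prober; the repeated warnings here are a symptom of the missing binary, not of malformed driver output.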
Jan 17 12:07:24.607229 kubelet[2497]: E0117 12:07:24.607184 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:24.634175 containerd[1461]: time="2025-01-17T12:07:24.634076143Z" level=info msg="shim disconnected" id=b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f namespace=k8s.io Jan 17 12:07:24.634175 containerd[1461]: time="2025-01-17T12:07:24.634171693Z" level=warning msg="cleaning up after shim disconnected" id=b13c271d4c9fa10b8553d835533e441a98fd61284aa3c67e49a18333213ede3f namespace=k8s.io Jan 17 12:07:24.634175 containerd[1461]: time="2025-01-17T12:07:24.634185429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:07:25.478437 kubelet[2497]: E0117 12:07:25.478358 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:25.538920 kubelet[2497]: E0117 12:07:25.538875 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:25.540843 containerd[1461]: time="2025-01-17T12:07:25.540694339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:07:27.478735 kubelet[2497]: E0117 12:07:27.478596 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:29.478446 kubelet[2497]: E0117 12:07:29.478359 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:31.478603 kubelet[2497]: E0117 12:07:31.478501 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:32.860990 containerd[1461]: time="2025-01-17T12:07:32.860909602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:32.862002 containerd[1461]: time="2025-01-17T12:07:32.861876670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:07:32.863800 containerd[1461]: time="2025-01-17T12:07:32.863697023Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:32.867955 containerd[1461]: time="2025-01-17T12:07:32.867884737Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:32.868952 containerd[1461]: time="2025-01-17T12:07:32.868876873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.328125657s" Jan 17 12:07:32.868952 containerd[1461]: time="2025-01-17T12:07:32.868915455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:07:32.872120 containerd[1461]: time="2025-01-17T12:07:32.872066821Z" level=info msg="CreateContainer within sandbox \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:07:32.899631 containerd[1461]: time="2025-01-17T12:07:32.899359919Z" level=info msg="CreateContainer within sandbox \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61\"" Jan 17 12:07:32.900301 containerd[1461]: time="2025-01-17T12:07:32.900276111Z" level=info msg="StartContainer for \"44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61\"" Jan 17 12:07:32.948690 systemd[1]: Started cri-containerd-44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61.scope - libcontainer container 44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61. Jan 17 12:07:32.988920 containerd[1461]: time="2025-01-17T12:07:32.988855383Z" level=info msg="StartContainer for \"44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61\" returns successfully" Jan 17 12:07:33.558133 kubelet[2497]: E0117 12:07:33.557331 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:33.613265 kubelet[2497]: E0117 12:07:33.613178 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:34.612148 kubelet[2497]: E0117 12:07:34.612075 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:34.957683 systemd[1]: cri-containerd-44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61.scope: Deactivated successfully. Jan 17 12:07:34.984301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61-rootfs.mount: Deactivated successfully. Jan 17 12:07:35.010665 kubelet[2497]: I0117 12:07:35.010613 2497 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 17 12:07:35.132101 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:41940.service - OpenSSH per-connection server daemon (10.0.0.1:41940). 
Jan 17 12:07:35.136796 systemd[1]: Created slice kubepods-burstable-pod3401b0dc_0111_43e3_9a45_29e2c23e6b1c.slice - libcontainer container kubepods-burstable-pod3401b0dc_0111_43e3_9a45_29e2c23e6b1c.slice. Jan 17 12:07:35.150105 systemd[1]: Created slice kubepods-burstable-pod6566e330_1ca2_4086_a201_75a74be50141.slice - libcontainer container kubepods-burstable-pod6566e330_1ca2_4086_a201_75a74be50141.slice. Jan 17 12:07:35.156096 systemd[1]: Created slice kubepods-besteffort-pode3d4f78c_7b70_42ea_b36d_e9b2418ba7c2.slice - libcontainer container kubepods-besteffort-pode3d4f78c_7b70_42ea_b36d_e9b2418ba7c2.slice. Jan 17 12:07:35.161730 systemd[1]: Created slice kubepods-besteffort-pod20ef2c4d_046f_4d6c_ad61_7a9796d385c0.slice - libcontainer container kubepods-besteffort-pod20ef2c4d_046f_4d6c_ad61_7a9796d385c0.slice. Jan 17 12:07:35.171321 systemd[1]: Created slice kubepods-besteffort-pod3cf03d78_f4b6_4259_9d2e_ac6c275e4392.slice - libcontainer container kubepods-besteffort-pod3cf03d78_f4b6_4259_9d2e_ac6c275e4392.slice. Jan 17 12:07:35.199461 sshd[3324]: Accepted publickey for core from 10.0.0.1 port 41940 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:35.201446 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:35.207414 systemd-logind[1446]: New session 8 of user core. Jan 17 12:07:35.216236 kubelet[2497]: I0117 12:07:35.216131 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qx7w\" (UniqueName: \"kubernetes.io/projected/20ef2c4d-046f-4d6c-ad61-7a9796d385c0-kube-api-access-5qx7w\") pod \"calico-apiserver-9d5f4547c-bmklj\" (UID: \"20ef2c4d-046f-4d6c-ad61-7a9796d385c0\") " pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" Jan 17 12:07:35.216236 kubelet[2497]: I0117 12:07:35.216170 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf48m\" (UniqueName: \"kubernetes.io/projected/3401b0dc-0111-43e3-9a45-29e2c23e6b1c-kube-api-access-pf48m\") pod \"coredns-6f6b679f8f-bbv8n\" (UID: \"3401b0dc-0111-43e3-9a45-29e2c23e6b1c\") " pod="kube-system/coredns-6f6b679f8f-bbv8n" Jan 17 12:07:35.216236 kubelet[2497]: I0117 12:07:35.216192 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xdf\" (UniqueName: \"kubernetes.io/projected/3cf03d78-f4b6-4259-9d2e-ac6c275e4392-kube-api-access-x7xdf\") pod \"calico-apiserver-9d5f4547c-shv4q\" (UID: \"3cf03d78-f4b6-4259-9d2e-ac6c275e4392\") " pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" Jan 17 12:07:35.216236 kubelet[2497]: I0117 12:07:35.216217 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9s8\" (UniqueName: \"kubernetes.io/projected/6566e330-1ca2-4086-a201-75a74be50141-kube-api-access-4r9s8\") pod \"coredns-6f6b679f8f-d5h6c\" (UID: \"6566e330-1ca2-4086-a201-75a74be50141\") " pod="kube-system/coredns-6f6b679f8f-d5h6c" Jan 17 12:07:35.216386 kubelet[2497]: I0117 12:07:35.216244 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20ef2c4d-046f-4d6c-ad61-7a9796d385c0-calico-apiserver-certs\") pod \"calico-apiserver-9d5f4547c-bmklj\" (UID: \"20ef2c4d-046f-4d6c-ad61-7a9796d385c0\") " pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" Jan 17 12:07:35.216386 kubelet[2497]: I0117 12:07:35.216268 
2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2-tigera-ca-bundle\") pod \"calico-kube-controllers-5d7dd95494-9j7cw\" (UID: \"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2\") " pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" Jan 17 12:07:35.216386 kubelet[2497]: I0117 12:07:35.216290 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5bjj\" (UniqueName: \"kubernetes.io/projected/e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2-kube-api-access-r5bjj\") pod \"calico-kube-controllers-5d7dd95494-9j7cw\" (UID: \"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2\") " pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" Jan 17 12:07:35.216386 kubelet[2497]: I0117 12:07:35.216314 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3401b0dc-0111-43e3-9a45-29e2c23e6b1c-config-volume\") pod \"coredns-6f6b679f8f-bbv8n\" (UID: \"3401b0dc-0111-43e3-9a45-29e2c23e6b1c\") " pod="kube-system/coredns-6f6b679f8f-bbv8n" Jan 17 12:07:35.216386 kubelet[2497]: I0117 12:07:35.216338 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6566e330-1ca2-4086-a201-75a74be50141-config-volume\") pod \"coredns-6f6b679f8f-d5h6c\" (UID: \"6566e330-1ca2-4086-a201-75a74be50141\") " pod="kube-system/coredns-6f6b679f8f-d5h6c" Jan 17 12:07:35.216508 kubelet[2497]: I0117 12:07:35.216353 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cf03d78-f4b6-4259-9d2e-ac6c275e4392-calico-apiserver-certs\") pod \"calico-apiserver-9d5f4547c-shv4q\" (UID: \"3cf03d78-f4b6-4259-9d2e-ac6c275e4392\") " pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" Jan 17 12:07:35.218668 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:07:35.261469 containerd[1461]: time="2025-01-17T12:07:35.261371682Z" level=info msg="shim disconnected" id=44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61 namespace=k8s.io Jan 17 12:07:35.261469 containerd[1461]: time="2025-01-17T12:07:35.261447063Z" level=warning msg="cleaning up after shim disconnected" id=44f2ef5893ad590b23f3fee871afc8b7710bb10ec92ddfec06582abf6f96df61 namespace=k8s.io Jan 17 12:07:35.261469 containerd[1461]: time="2025-01-17T12:07:35.261460047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:07:35.366075 sshd[3324]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:35.370182 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:41940.service: Deactivated successfully. Jan 17 12:07:35.372305 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:07:35.373180 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:07:35.374226 systemd-logind[1446]: Removed session 8. 
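The kubepods-*.slice units created above encode each pod's QoS class and UID directly in the name: burstable and best-effort pods land under the matching QoS parent, with dashes in the pod UID rewritten as underscores for systemd. A small sketch that reproduces the names systemd reports (the podSlice helper is illustrative, not kubelet code):

    // slice-name-sketch.go: rebuild the systemd slice names seen in the log
    // from a pod's QoS class and UID (dashes become underscores).
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // coredns-6f6b679f8f-bbv8n (burstable) from the entries above.
        fmt.Println(podSlice("burstable", "3401b0dc-0111-43e3-9a45-29e2c23e6b1c"))
        // calico-apiserver-9d5f4547c-bmklj (best-effort).
        fmt.Println(podSlice("besteffort", "20ef2c4d-046f-4d6c-ad61-7a9796d385c0"))
    }

Both printed names match the slices in the log, which is how a pod UID from the API can be tied back to its cgroup on the node.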
Jan 17 12:07:35.442942 kubelet[2497]: E0117 12:07:35.442900 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:35.443717 containerd[1461]: time="2025-01-17T12:07:35.443670794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbv8n,Uid:3401b0dc-0111-43e3-9a45-29e2c23e6b1c,Namespace:kube-system,Attempt:0,}" Jan 17 12:07:35.453357 kubelet[2497]: E0117 12:07:35.453324 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:35.453896 containerd[1461]: time="2025-01-17T12:07:35.453849894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d5h6c,Uid:6566e330-1ca2-4086-a201-75a74be50141,Namespace:kube-system,Attempt:0,}" Jan 17 12:07:35.459069 containerd[1461]: time="2025-01-17T12:07:35.459025181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7dd95494-9j7cw,Uid:e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2,Namespace:calico-system,Attempt:0,}" Jan 17 12:07:35.470081 containerd[1461]: time="2025-01-17T12:07:35.469949322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-bmklj,Uid:20ef2c4d-046f-4d6c-ad61-7a9796d385c0,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:07:35.474770 containerd[1461]: time="2025-01-17T12:07:35.474735367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-shv4q,Uid:3cf03d78-f4b6-4259-9d2e-ac6c275e4392,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:07:35.486908 systemd[1]: Created slice kubepods-besteffort-pod72572da8_b025_4046_8056_05fcf0914c02.slice - libcontainer container kubepods-besteffort-pod72572da8_b025_4046_8056_05fcf0914c02.slice. 
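Every RunPodSandbox attempt below fails the same way: install-cni has put the Calico CNI plugin in place, but calico/node is not running yet, so /var/lib/calico/nodename, which the plugin reads to learn the node's identity, does not exist and each add/delete call aborts with the stat error quoted in the messages. A minimal sketch of that precondition (an illustration; nodeNameFromFile is not the plugin's actual function):

    // nodename-check-sketch.go: the failing precondition behind the sandbox
    // errors below. Until calico/node writes /var/lib/calico/nodename, any
    // attempt to set up pod networking stops at this read.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func nodeNameFromFile(path string) (string, error) {
        if _, err := os.Stat(path); err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        if _, err := nodeNameFromFile("/var/lib/calico/nodename"); err != nil {
            // This is the state captured in the log: the file is missing, so
            // every CNI call fails and the pods stay stuck in sandbox creation.
            fmt.Println("CNI setup would fail:", err)
        }
    }

Once calico/node (whose image pull begins below) starts and writes that file, the kubelet's retries of these sandbox requests can succeed.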
Jan 17 12:07:35.489402 containerd[1461]: time="2025-01-17T12:07:35.489352859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjd54,Uid:72572da8-b025-4046-8056-05fcf0914c02,Namespace:calico-system,Attempt:0,}" Jan 17 12:07:35.584071 kubelet[2497]: E0117 12:07:35.583798 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:35.585368 containerd[1461]: time="2025-01-17T12:07:35.584980551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:07:35.596910 containerd[1461]: time="2025-01-17T12:07:35.596694607Z" level=error msg="Failed to destroy network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.597495 containerd[1461]: time="2025-01-17T12:07:35.597470275Z" level=error msg="encountered an error cleaning up failed sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.597614 containerd[1461]: time="2025-01-17T12:07:35.597592795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbv8n,Uid:3401b0dc-0111-43e3-9a45-29e2c23e6b1c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.598447 kubelet[2497]: E0117 12:07:35.597934 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.598447 kubelet[2497]: E0117 12:07:35.598003 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bbv8n" Jan 17 12:07:35.598447 kubelet[2497]: E0117 12:07:35.598025 2497 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bbv8n" Jan 17 12:07:35.598591 kubelet[2497]: E0117 12:07:35.598412 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-6f6b679f8f-bbv8n_kube-system(3401b0dc-0111-43e3-9a45-29e2c23e6b1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bbv8n_kube-system(3401b0dc-0111-43e3-9a45-29e2c23e6b1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bbv8n" podUID="3401b0dc-0111-43e3-9a45-29e2c23e6b1c" Jan 17 12:07:35.612025 containerd[1461]: time="2025-01-17T12:07:35.611961299Z" level=error msg="Failed to destroy network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.612422 containerd[1461]: time="2025-01-17T12:07:35.612381418Z" level=error msg="encountered an error cleaning up failed sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.612478 containerd[1461]: time="2025-01-17T12:07:35.612456078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7dd95494-9j7cw,Uid:e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.612807 kubelet[2497]: E0117 12:07:35.612769 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.613178 kubelet[2497]: E0117 12:07:35.612832 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" Jan 17 12:07:35.613178 kubelet[2497]: E0117 12:07:35.612852 2497 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" Jan 17 12:07:35.613178 
kubelet[2497]: E0117 12:07:35.612903 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d7dd95494-9j7cw_calico-system(e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d7dd95494-9j7cw_calico-system(e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" podUID="e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2" Jan 17 12:07:35.622334 containerd[1461]: time="2025-01-17T12:07:35.622257890Z" level=error msg="Failed to destroy network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.622930 containerd[1461]: time="2025-01-17T12:07:35.622904887Z" level=error msg="encountered an error cleaning up failed sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.623030 containerd[1461]: time="2025-01-17T12:07:35.623010154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d5h6c,Uid:6566e330-1ca2-4086-a201-75a74be50141,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.623352 kubelet[2497]: E0117 12:07:35.623320 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.623477 kubelet[2497]: E0117 12:07:35.623461 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d5h6c" Jan 17 12:07:35.623626 kubelet[2497]: E0117 12:07:35.623582 2497 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d5h6c" Jan 17 12:07:35.624043 kubelet[2497]: E0117 12:07:35.623798 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-d5h6c_kube-system(6566e330-1ca2-4086-a201-75a74be50141)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-d5h6c_kube-system(6566e330-1ca2-4086-a201-75a74be50141)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d5h6c" podUID="6566e330-1ca2-4086-a201-75a74be50141" Jan 17 12:07:35.635517 containerd[1461]: time="2025-01-17T12:07:35.635445625Z" level=error msg="Failed to destroy network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.636459 containerd[1461]: time="2025-01-17T12:07:35.636410969Z" level=error msg="encountered an error cleaning up failed sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.636697 containerd[1461]: time="2025-01-17T12:07:35.636663003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-shv4q,Uid:3cf03d78-f4b6-4259-9d2e-ac6c275e4392,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.637117 kubelet[2497]: E0117 12:07:35.637074 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.637327 kubelet[2497]: E0117 12:07:35.637298 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" Jan 17 12:07:35.637444 kubelet[2497]: E0117 12:07:35.637420 2497 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" Jan 17 12:07:35.637667 kubelet[2497]: E0117 12:07:35.637579 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d5f4547c-shv4q_calico-apiserver(3cf03d78-f4b6-4259-9d2e-ac6c275e4392)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d5f4547c-shv4q_calico-apiserver(3cf03d78-f4b6-4259-9d2e-ac6c275e4392)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" podUID="3cf03d78-f4b6-4259-9d2e-ac6c275e4392" Jan 17 12:07:35.645110 containerd[1461]: time="2025-01-17T12:07:35.645051588Z" level=error msg="Failed to destroy network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.646129 containerd[1461]: time="2025-01-17T12:07:35.646091423Z" level=error msg="encountered an error cleaning up failed sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.646185 containerd[1461]: time="2025-01-17T12:07:35.646157938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-bmklj,Uid:20ef2c4d-046f-4d6c-ad61-7a9796d385c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.646431 kubelet[2497]: E0117 12:07:35.646380 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.646545 kubelet[2497]: E0117 12:07:35.646452 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" Jan 17 12:07:35.646545 kubelet[2497]: E0117 12:07:35.646479 2497 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" Jan 17 12:07:35.646677 kubelet[2497]: E0117 12:07:35.646636 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d5f4547c-bmklj_calico-apiserver(20ef2c4d-046f-4d6c-ad61-7a9796d385c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d5f4547c-bmklj_calico-apiserver(20ef2c4d-046f-4d6c-ad61-7a9796d385c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" podUID="20ef2c4d-046f-4d6c-ad61-7a9796d385c0" Jan 17 12:07:35.658834 containerd[1461]: time="2025-01-17T12:07:35.658758760Z" level=error msg="Failed to destroy network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.659254 containerd[1461]: time="2025-01-17T12:07:35.659219566Z" level=error msg="encountered an error cleaning up failed sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.659313 containerd[1461]: time="2025-01-17T12:07:35.659289497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjd54,Uid:72572da8-b025-4046-8056-05fcf0914c02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.659557 kubelet[2497]: E0117 12:07:35.659503 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:35.659636 kubelet[2497]: E0117 12:07:35.659577 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:35.659636 kubelet[2497]: E0117 12:07:35.659611 2497 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pjd54" Jan 17 12:07:35.659714 kubelet[2497]: E0117 12:07:35.659650 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pjd54_calico-system(72572da8-b025-4046-8056-05fcf0914c02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pjd54_calico-system(72572da8-b025-4046-8056-05fcf0914c02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:36.586706 kubelet[2497]: I0117 12:07:36.586666 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:07:36.587431 containerd[1461]: time="2025-01-17T12:07:36.587384831Z" level=info msg="StopPodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\"" Jan 17 12:07:36.588519 containerd[1461]: time="2025-01-17T12:07:36.587613980Z" level=info msg="Ensure that sandbox 2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e in task-service has been cleanup successfully" Jan 17 12:07:36.588567 kubelet[2497]: I0117 12:07:36.587730 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:07:36.588781 containerd[1461]: time="2025-01-17T12:07:36.588707816Z" level=info msg="StopPodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\"" Jan 17 12:07:36.591596 containerd[1461]: time="2025-01-17T12:07:36.589302073Z" level=info msg="Ensure that sandbox 7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9 in task-service has been cleanup successfully" Jan 17 12:07:36.594938 kubelet[2497]: I0117 12:07:36.594894 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:07:36.595958 containerd[1461]: time="2025-01-17T12:07:36.595899541Z" level=info msg="StopPodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\"" Jan 17 12:07:36.596822 containerd[1461]: time="2025-01-17T12:07:36.596800884Z" level=info msg="Ensure that sandbox 98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0 in task-service has been cleanup successfully" Jan 17 12:07:36.597482 kubelet[2497]: I0117 12:07:36.597464 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:07:36.597959 containerd[1461]: time="2025-01-17T12:07:36.597938473Z" level=info msg="StopPodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\"" Jan 17 12:07:36.598911 containerd[1461]: time="2025-01-17T12:07:36.598870473Z" 
level=info msg="Ensure that sandbox 8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346 in task-service has been cleanup successfully" Jan 17 12:07:36.599246 kubelet[2497]: I0117 12:07:36.599199 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:07:36.600415 containerd[1461]: time="2025-01-17T12:07:36.600371925Z" level=info msg="StopPodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\"" Jan 17 12:07:36.600607 containerd[1461]: time="2025-01-17T12:07:36.600577341Z" level=info msg="Ensure that sandbox 66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd in task-service has been cleanup successfully" Jan 17 12:07:36.600950 kubelet[2497]: I0117 12:07:36.600927 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:07:36.602133 containerd[1461]: time="2025-01-17T12:07:36.601987801Z" level=info msg="StopPodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\"" Jan 17 12:07:36.604385 containerd[1461]: time="2025-01-17T12:07:36.604205799Z" level=info msg="Ensure that sandbox a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d in task-service has been cleanup successfully" Jan 17 12:07:36.648245 containerd[1461]: time="2025-01-17T12:07:36.648076529Z" level=error msg="StopPodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" failed" error="failed to destroy network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:36.649113 kubelet[2497]: E0117 12:07:36.649076 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:07:36.650417 kubelet[2497]: E0117 12:07:36.649665 2497 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9"} Jan 17 12:07:36.650417 kubelet[2497]: E0117 12:07:36.649735 2497 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:07:36.650417 kubelet[2497]: E0117 12:07:36.649761 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" podUID="e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2" Jan 17 12:07:36.660583 containerd[1461]: time="2025-01-17T12:07:36.660482652Z" level=error msg="StopPodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" failed" error="failed to destroy network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:36.661129 kubelet[2497]: E0117 12:07:36.661072 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:07:36.661205 kubelet[2497]: E0117 12:07:36.661143 2497 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e"} Jan 17 12:07:36.661205 kubelet[2497]: E0117 12:07:36.661178 2497 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3401b0dc-0111-43e3-9a45-29e2c23e6b1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:07:36.661323 kubelet[2497]: E0117 12:07:36.661205 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3401b0dc-0111-43e3-9a45-29e2c23e6b1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bbv8n" podUID="3401b0dc-0111-43e3-9a45-29e2c23e6b1c" Jan 17 12:07:36.663160 containerd[1461]: time="2025-01-17T12:07:36.663097115Z" level=error msg="StopPodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" failed" error="failed to destroy network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:36.663668 kubelet[2497]: E0117 12:07:36.663576 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:07:36.663668 kubelet[2497]: E0117 12:07:36.663602 2497 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0"} Jan 17 12:07:36.663668 kubelet[2497]: E0117 12:07:36.663623 2497 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6566e330-1ca2-4086-a201-75a74be50141\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:07:36.663668 kubelet[2497]: E0117 12:07:36.663639 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6566e330-1ca2-4086-a201-75a74be50141\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d5h6c" podUID="6566e330-1ca2-4086-a201-75a74be50141" Jan 17 12:07:36.665449 containerd[1461]: time="2025-01-17T12:07:36.665395203Z" level=error msg="StopPodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" failed" error="failed to destroy network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:36.665600 containerd[1461]: time="2025-01-17T12:07:36.665533884Z" level=error msg="StopPodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" failed" error="failed to destroy network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:36.665808 kubelet[2497]: E0117 12:07:36.665779 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:07:36.665874 kubelet[2497]: E0117 12:07:36.665811 2497 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd"} Jan 17 12:07:36.665874 kubelet[2497]: E0117 12:07:36.665794 2497 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:07:36.665874 kubelet[2497]: E0117 12:07:36.665844 2497 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cf03d78-f4b6-4259-9d2e-ac6c275e4392\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:07:36.665874 kubelet[2497]: E0117 12:07:36.665864 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cf03d78-f4b6-4259-9d2e-ac6c275e4392\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" podUID="3cf03d78-f4b6-4259-9d2e-ac6c275e4392" Jan 17 12:07:36.666015 kubelet[2497]: E0117 12:07:36.665872 2497 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d"} Jan 17 12:07:36.666015 kubelet[2497]: E0117 12:07:36.665919 2497 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20ef2c4d-046f-4d6c-ad61-7a9796d385c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:07:36.666015 kubelet[2497]: E0117 12:07:36.665955 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20ef2c4d-046f-4d6c-ad61-7a9796d385c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" podUID="20ef2c4d-046f-4d6c-ad61-7a9796d385c0" Jan 17 12:07:36.673397 containerd[1461]: time="2025-01-17T12:07:36.673310307Z" level=error msg="StopPodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" failed" error="failed to destroy network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:07:36.673666 kubelet[2497]: E0117 12:07:36.673622 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:07:36.673740 kubelet[2497]: E0117 12:07:36.673673 2497 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346"} Jan 17 12:07:36.673740 kubelet[2497]: E0117 12:07:36.673712 2497 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72572da8-b025-4046-8056-05fcf0914c02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:07:36.673740 kubelet[2497]: E0117 12:07:36.673733 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72572da8-b025-4046-8056-05fcf0914c02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pjd54" podUID="72572da8-b025-4046-8056-05fcf0914c02" Jan 17 12:07:40.382300 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:49942.service - OpenSSH per-connection server daemon (10.0.0.1:49942). Jan 17 12:07:40.466765 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 49942 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:40.468646 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:40.483162 systemd-logind[1446]: New session 9 of user core. Jan 17 12:07:40.487795 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:07:41.034943 sshd[3720]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:41.040152 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:49942.service: Deactivated successfully. Jan 17 12:07:41.043097 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:07:41.044548 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:07:41.046680 systemd-logind[1446]: Removed session 9. Jan 17 12:07:43.117243 kubelet[2497]: I0117 12:07:43.117172 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:07:43.118154 kubelet[2497]: E0117 12:07:43.117645 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:43.265331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933757194.mount: Deactivated successfully. 
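Every CreatePodSandbox and KillPodSandbox failure logged above fails on the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename before adding or deleting a pod network, and that file does not exist until the calico/node container is running with /var/lib/calico/ mounted (its image is still being pulled at this point in the log; the calico-node container starts at 12:07:46 below, after which the retried teardowns and sandbox creations succeed). A minimal Go sketch of that gate, assuming only what the error text itself states (the file path and the suggested remedy), not Calico's actual source:

package main

// Illustrative sketch of the precondition behind the errors above: CNI add and
// delete calls are refused until /var/lib/calico/nodename exists, which is the
// file calico/node writes once it is running and has mounted /var/lib/calico/.

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func cniReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("plugin type=\"calico\" failed (add): stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := cniReady(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("nodename present; sandbox networking can proceed")
}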
Jan 17 12:07:43.614058 kubelet[2497]: E0117 12:07:43.614009 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:45.413710 containerd[1461]: time="2025-01-17T12:07:45.413637043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:45.421169 containerd[1461]: time="2025-01-17T12:07:45.421067022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:07:45.427630 containerd[1461]: time="2025-01-17T12:07:45.427423326Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:45.433943 containerd[1461]: time="2025-01-17T12:07:45.433852727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:45.434655 containerd[1461]: time="2025-01-17T12:07:45.434557640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.849524791s" Jan 17 12:07:45.434655 containerd[1461]: time="2025-01-17T12:07:45.434654382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:07:45.449814 containerd[1461]: time="2025-01-17T12:07:45.449713987Z" level=info msg="CreateContainer within sandbox \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:07:45.945460 containerd[1461]: time="2025-01-17T12:07:45.945398333Z" level=info msg="CreateContainer within sandbox \"ff2c65ab735a4c74d220495c18f0910ff8bfe1df25a2e294e3207a1b7fd63238\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ef707ea0f1ee5cc678cc7b45ac07e30fd333a7cf4bb70a2968cea717236cfde1\"" Jan 17 12:07:45.946091 containerd[1461]: time="2025-01-17T12:07:45.946049254Z" level=info msg="StartContainer for \"ef707ea0f1ee5cc678cc7b45ac07e30fd333a7cf4bb70a2968cea717236cfde1\"" Jan 17 12:07:46.060834 systemd[1]: Started cri-containerd-ef707ea0f1ee5cc678cc7b45ac07e30fd333a7cf4bb70a2968cea717236cfde1.scope - libcontainer container ef707ea0f1ee5cc678cc7b45ac07e30fd333a7cf4bb70a2968cea717236cfde1. Jan 17 12:07:46.062848 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:49944.service - OpenSSH per-connection server daemon (10.0.0.1:49944). Jan 17 12:07:46.102073 containerd[1461]: time="2025-01-17T12:07:46.102018952Z" level=info msg="StartContainer for \"ef707ea0f1ee5cc678cc7b45ac07e30fd333a7cf4bb70a2968cea717236cfde1\" returns successfully" Jan 17 12:07:46.115484 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 49944 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:46.117783 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:46.123294 systemd-logind[1446]: New session 10 of user core. 
Jan 17 12:07:46.129703 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:07:46.182811 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:07:46.182991 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:07:46.303851 sshd[3760]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:46.308513 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:49944.service: Deactivated successfully. Jan 17 12:07:46.312021 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:07:46.313215 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:07:46.316510 systemd-logind[1446]: Removed session 10. Jan 17 12:07:46.623923 kubelet[2497]: E0117 12:07:46.623858 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:46.641155 kubelet[2497]: I0117 12:07:46.641062 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ctg9m" podStartSLOduration=2.274633223 podStartE2EDuration="29.641036752s" podCreationTimestamp="2025-01-17 12:07:17 +0000 UTC" firstStartedPulling="2025-01-17 12:07:18.069259738 +0000 UTC m=+13.685551875" lastFinishedPulling="2025-01-17 12:07:45.435663277 +0000 UTC m=+41.051955404" observedRunningTime="2025-01-17 12:07:46.640673441 +0000 UTC m=+42.256965588" watchObservedRunningTime="2025-01-17 12:07:46.641036752 +0000 UTC m=+42.257328899" Jan 17 12:07:47.625645 kubelet[2497]: E0117 12:07:47.625582 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:47.739583 kernel: bpftool[4000]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:07:48.015043 systemd-networkd[1392]: vxlan.calico: Link UP Jan 17 12:07:48.015704 systemd-networkd[1392]: vxlan.calico: Gained carrier Jan 17 12:07:48.479653 containerd[1461]: time="2025-01-17T12:07:48.479599488Z" level=info msg="StopPodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\"" Jan 17 12:07:48.481586 containerd[1461]: time="2025-01-17T12:07:48.479899992Z" level=info msg="StopPodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\"" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:48.799 [INFO][4099] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:48.800 [INFO][4099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" iface="eth0" netns="/var/run/netns/cni-04d843ac-0121-df21-ed19-a532d90b61b3" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:48.800 [INFO][4099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" iface="eth0" netns="/var/run/netns/cni-04d843ac-0121-df21-ed19-a532d90b61b3" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:48.805 [INFO][4099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" iface="eth0" netns="/var/run/netns/cni-04d843ac-0121-df21-ed19-a532d90b61b3" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:48.805 [INFO][4099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:48.805 [INFO][4099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.001 [INFO][4120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.002 [INFO][4120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.002 [INFO][4120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.009 [WARNING][4120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.009 [INFO][4120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.011 [INFO][4120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:07:49.018353 containerd[1461]: 2025-01-17 12:07:49.014 [INFO][4099] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:07:49.018954 containerd[1461]: time="2025-01-17T12:07:49.018708530Z" level=info msg="TearDown network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" successfully" Jan 17 12:07:49.018954 containerd[1461]: time="2025-01-17T12:07:49.018747242Z" level=info msg="StopPodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" returns successfully" Jan 17 12:07:49.021203 kubelet[2497]: E0117 12:07:49.019170 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:49.021792 containerd[1461]: time="2025-01-17T12:07:49.019857887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbv8n,Uid:3401b0dc-0111-43e3-9a45-29e2c23e6b1c,Namespace:kube-system,Attempt:1,}" Jan 17 12:07:49.022013 systemd[1]: run-netns-cni\x2d04d843ac\x2d0121\x2ddf21\x2ded19\x2da532d90b61b3.mount: Deactivated successfully. 
Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:48.799 [INFO][4108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:48.800 [INFO][4108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" iface="eth0" netns="/var/run/netns/cni-bf26e6d3-0ec0-5859-fe68-c2c996667f38" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:48.800 [INFO][4108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" iface="eth0" netns="/var/run/netns/cni-bf26e6d3-0ec0-5859-fe68-c2c996667f38" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:48.805 [INFO][4108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" iface="eth0" netns="/var/run/netns/cni-bf26e6d3-0ec0-5859-fe68-c2c996667f38" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:48.805 [INFO][4108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:48.805 [INFO][4108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.001 [INFO][4119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.002 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.011 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.016 [WARNING][4119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.016 [INFO][4119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.018 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:07:49.026478 containerd[1461]: 2025-01-17 12:07:49.023 [INFO][4108] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:07:49.026940 containerd[1461]: time="2025-01-17T12:07:49.026697284Z" level=info msg="TearDown network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" successfully" Jan 17 12:07:49.026940 containerd[1461]: time="2025-01-17T12:07:49.026731408Z" level=info msg="StopPodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" returns successfully" Jan 17 12:07:49.027781 containerd[1461]: time="2025-01-17T12:07:49.027748027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-shv4q,Uid:3cf03d78-f4b6-4259-9d2e-ac6c275e4392,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:07:49.029712 systemd[1]: run-netns-cni\x2dbf26e6d3\x2d0ec0\x2d5859\x2dfe68\x2dc2c996667f38.mount: Deactivated successfully. Jan 17 12:07:49.343082 systemd-networkd[1392]: calidcb313a6660: Link UP Jan 17 12:07:49.344464 systemd-networkd[1392]: calidcb313a6660: Gained carrier Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.253 [INFO][4135] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0 coredns-6f6b679f8f- kube-system 3401b0dc-0111-43e3-9a45-29e2c23e6b1c 847 0 2025-01-17 12:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-bbv8n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcb313a6660 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.253 [INFO][4135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.289 [INFO][4160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" HandleID="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.301 [INFO][4160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" HandleID="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-bbv8n", "timestamp":"2025-01-17 12:07:49.28900801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.301 [INFO][4160] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.301 [INFO][4160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.301 [INFO][4160] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.307 [INFO][4160] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.314 [INFO][4160] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.319 [INFO][4160] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.321 [INFO][4160] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.323 [INFO][4160] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.323 [INFO][4160] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.325 [INFO][4160] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891 Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.329 [INFO][4160] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.335 [INFO][4160] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.336 [INFO][4160] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" host="localhost" Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.336 [INFO][4160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
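The IPAM trace above records Calico's host-affinity block assignment end to end: take the host-wide IPAM lock, look up the block affined to this host (192.168.88.128/26), confirm the affinity, claim the next free address from the block, then release the lock; the claimed address, 192.168.88.129/26, is reported in the entries that follow, and the next pod further down receives 192.168.88.130. A simplified, hypothetical sketch of the claim step in Go (not Calico's implementation):

package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamLock stands in for the host-wide IPAM lock acquired and released in the log.
var ipamLock sync.Mutex

// incr returns ip+1, carrying across octets.
func incr(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// nextFree scans the affined block for the first address not yet assigned.
func nextFree(block *net.IPNet, assigned map[string]bool) (net.IP, bool) {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = incr(ip) {
		if !assigned[ip.String()] {
			return ip, true
		}
	}
	return nil, false
}

func main() {
	ipamLock.Lock()
	defer ipamLock.Unlock()

	// The block this host holds an affinity for, per the log above.
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	assigned := map[string]bool{"192.168.88.128": true, "192.168.88.129": true}
	if ip, ok := nextFree(block, assigned); ok {
		fmt.Printf("claimed %s from %s\n", ip, block) // e.g. 192.168.88.130
	}
}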
Jan 17 12:07:49.361017 containerd[1461]: 2025-01-17 12:07:49.336 [INFO][4160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" HandleID="k8s-pod-network.0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.361865 containerd[1461]: 2025-01-17 12:07:49.339 [INFO][4135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3401b0dc-0111-43e3-9a45-29e2c23e6b1c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-bbv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb313a6660", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:49.361865 containerd[1461]: 2025-01-17 12:07:49.340 [INFO][4135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.361865 containerd[1461]: 2025-01-17 12:07:49.340 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcb313a6660 ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.361865 containerd[1461]: 2025-01-17 12:07:49.343 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.361865 containerd[1461]: 2025-01-17 12:07:49.345 
[INFO][4135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3401b0dc-0111-43e3-9a45-29e2c23e6b1c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891", Pod:"coredns-6f6b679f8f-bbv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb313a6660", MAC:"e2:1f:c5:54:8f:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:49.361865 containerd[1461]: 2025-01-17 12:07:49.356 [INFO][4135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891" Namespace="kube-system" Pod="coredns-6f6b679f8f-bbv8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:07:49.404237 containerd[1461]: time="2025-01-17T12:07:49.404113873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:49.404237 containerd[1461]: time="2025-01-17T12:07:49.404183113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:49.404237 containerd[1461]: time="2025-01-17T12:07:49.404195336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:49.404548 containerd[1461]: time="2025-01-17T12:07:49.404288702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:49.426833 systemd[1]: Started cri-containerd-0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891.scope - libcontainer container 0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891. 
Jan 17 12:07:49.441881 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:07:49.454095 systemd-networkd[1392]: cali189d70d9487: Link UP Jan 17 12:07:49.454753 systemd-networkd[1392]: cali189d70d9487: Gained carrier Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.260 [INFO][4146] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0 calico-apiserver-9d5f4547c- calico-apiserver 3cf03d78-f4b6-4259-9d2e-ac6c275e4392 846 0 2025-01-17 12:07:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d5f4547c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d5f4547c-shv4q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali189d70d9487 [] []}} ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.261 [INFO][4146] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.301 [INFO][4166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" HandleID="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.310 [INFO][4166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" HandleID="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000393530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9d5f4547c-shv4q", "timestamp":"2025-01-17 12:07:49.301199904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.310 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.336 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.336 [INFO][4166] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.410 [INFO][4166] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.414 [INFO][4166] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.419 [INFO][4166] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.421 [INFO][4166] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.423 [INFO][4166] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.423 [INFO][4166] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.424 [INFO][4166] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8 Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.429 [INFO][4166] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.444 [INFO][4166] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.444 [INFO][4166] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" host="localhost" Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.444 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:07:49.475888 containerd[1461]: 2025-01-17 12:07:49.444 [INFO][4166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" HandleID="k8s-pod-network.a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.477036 containerd[1461]: 2025-01-17 12:07:49.451 [INFO][4146] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf03d78-f4b6-4259-9d2e-ac6c275e4392", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d5f4547c-shv4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali189d70d9487", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:49.477036 containerd[1461]: 2025-01-17 12:07:49.451 [INFO][4146] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.477036 containerd[1461]: 2025-01-17 12:07:49.451 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali189d70d9487 ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.477036 containerd[1461]: 2025-01-17 12:07:49.455 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.477036 containerd[1461]: 2025-01-17 12:07:49.455 [INFO][4146] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" 
Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf03d78-f4b6-4259-9d2e-ac6c275e4392", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8", Pod:"calico-apiserver-9d5f4547c-shv4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali189d70d9487", MAC:"1e:af:72:4e:1a:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:49.477036 containerd[1461]: 2025-01-17 12:07:49.470 [INFO][4146] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-shv4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:07:49.480217 containerd[1461]: time="2025-01-17T12:07:49.480152367Z" level=info msg="StopPodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\"" Jan 17 12:07:49.488348 containerd[1461]: time="2025-01-17T12:07:49.488300531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbv8n,Uid:3401b0dc-0111-43e3-9a45-29e2c23e6b1c,Namespace:kube-system,Attempt:1,} returns sandbox id \"0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891\"" Jan 17 12:07:49.489402 kubelet[2497]: E0117 12:07:49.489375 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:49.495426 containerd[1461]: time="2025-01-17T12:07:49.495285352Z" level=info msg="CreateContainer within sandbox \"0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:07:49.514564 containerd[1461]: time="2025-01-17T12:07:49.511942577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:49.514564 containerd[1461]: time="2025-01-17T12:07:49.512021174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:49.514564 containerd[1461]: time="2025-01-17T12:07:49.512049187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:49.514564 containerd[1461]: time="2025-01-17T12:07:49.512187907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:49.539111 systemd[1]: Started cri-containerd-a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8.scope - libcontainer container a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8. Jan 17 12:07:49.558994 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:07:49.591692 containerd[1461]: time="2025-01-17T12:07:49.591640920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-shv4q,Uid:3cf03d78-f4b6-4259-9d2e-ac6c275e4392,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8\"" Jan 17 12:07:49.601231 containerd[1461]: time="2025-01-17T12:07:49.593360488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:07:49.627452 containerd[1461]: time="2025-01-17T12:07:49.627330028Z" level=info msg="CreateContainer within sandbox \"0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da072bb38f06872d32169cf9cb559ab4dd909e8f40dc5b37e83c863c4f1e93ec\"" Jan 17 12:07:49.632178 containerd[1461]: time="2025-01-17T12:07:49.630543801Z" level=info msg="StartContainer for \"da072bb38f06872d32169cf9cb559ab4dd909e8f40dc5b37e83c863c4f1e93ec\"" Jan 17 12:07:49.661666 systemd[1]: Started cri-containerd-da072bb38f06872d32169cf9cb559ab4dd909e8f40dc5b37e83c863c4f1e93ec.scope - libcontainer container da072bb38f06872d32169cf9cb559ab4dd909e8f40dc5b37e83c863c4f1e93ec. Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.645 [INFO][4264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.645 [INFO][4264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" iface="eth0" netns="/var/run/netns/cni-6e12a95e-bda9-cf80-c8c5-ba7891369d47" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.646 [INFO][4264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" iface="eth0" netns="/var/run/netns/cni-6e12a95e-bda9-cf80-c8c5-ba7891369d47" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.646 [INFO][4264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" iface="eth0" netns="/var/run/netns/cni-6e12a95e-bda9-cf80-c8c5-ba7891369d47" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.646 [INFO][4264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.646 [INFO][4264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.678 [INFO][4324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.678 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.678 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.684 [WARNING][4324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.684 [INFO][4324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.685 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:07:49.692206 containerd[1461]: 2025-01-17 12:07:49.688 [INFO][4264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:07:49.693090 containerd[1461]: time="2025-01-17T12:07:49.692956920Z" level=info msg="TearDown network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" successfully" Jan 17 12:07:49.693090 containerd[1461]: time="2025-01-17T12:07:49.692999981Z" level=info msg="StopPodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" returns successfully" Jan 17 12:07:49.693647 containerd[1461]: time="2025-01-17T12:07:49.693624753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7dd95494-9j7cw,Uid:e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2,Namespace:calico-system,Attempt:1,}" Jan 17 12:07:49.704343 containerd[1461]: time="2025-01-17T12:07:49.704303688Z" level=info msg="StartContainer for \"da072bb38f06872d32169cf9cb559ab4dd909e8f40dc5b37e83c863c4f1e93ec\" returns successfully" Jan 17 12:07:49.971222 systemd-networkd[1392]: cali777b6961282: Link UP Jan 17 12:07:49.973246 systemd-networkd[1392]: cali777b6961282: Gained carrier Jan 17 12:07:50.026332 systemd[1]: run-netns-cni\x2d6e12a95e\x2dbda9\x2dcf80\x2dc8c5\x2dba7891369d47.mount: Deactivated successfully. 
Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.890 [INFO][4349] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0 calico-kube-controllers-5d7dd95494- calico-system e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2 868 0 2025-01-17 12:07:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d7dd95494 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d7dd95494-9j7cw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali777b6961282 [] []}} ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.890 [INFO][4349] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.921 [INFO][4363] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" HandleID="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.930 [INFO][4363] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" HandleID="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d7dd95494-9j7cw", "timestamp":"2025-01-17 12:07:49.921700791 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.930 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.930 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.930 [INFO][4363] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.933 [INFO][4363] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.937 [INFO][4363] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.941 [INFO][4363] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.943 [INFO][4363] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.945 [INFO][4363] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.945 [INFO][4363] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.947 [INFO][4363] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.956 [INFO][4363] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.964 [INFO][4363] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.964 [INFO][4363] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" host="localhost" Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.964 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:07:50.031915 containerd[1461]: 2025-01-17 12:07:49.964 [INFO][4363] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" HandleID="k8s-pod-network.a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.033330 containerd[1461]: 2025-01-17 12:07:49.967 [INFO][4349] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0", GenerateName:"calico-kube-controllers-5d7dd95494-", Namespace:"calico-system", SelfLink:"", UID:"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7dd95494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d7dd95494-9j7cw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali777b6961282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:50.033330 containerd[1461]: 2025-01-17 12:07:49.967 [INFO][4349] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.033330 containerd[1461]: 2025-01-17 12:07:49.968 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali777b6961282 ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.033330 containerd[1461]: 2025-01-17 12:07:49.971 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.033330 containerd[1461]: 2025-01-17 12:07:49.973 [INFO][4349] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0", GenerateName:"calico-kube-controllers-5d7dd95494-", Namespace:"calico-system", SelfLink:"", UID:"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7dd95494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd", Pod:"calico-kube-controllers-5d7dd95494-9j7cw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali777b6961282", MAC:"16:ad:1b:7b:96:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:50.033330 containerd[1461]: 2025-01-17 12:07:50.028 [INFO][4349] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd" Namespace="calico-system" Pod="calico-kube-controllers-5d7dd95494-9j7cw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:07:50.047712 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Jan 17 12:07:50.059561 containerd[1461]: time="2025-01-17T12:07:50.058373529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:50.059561 containerd[1461]: time="2025-01-17T12:07:50.058449101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:50.059561 containerd[1461]: time="2025-01-17T12:07:50.058461254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:50.059561 containerd[1461]: time="2025-01-17T12:07:50.058578183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:50.087869 systemd[1]: Started cri-containerd-a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd.scope - libcontainer container a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd. 
Jan 17 12:07:50.108357 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:07:50.148563 containerd[1461]: time="2025-01-17T12:07:50.147395334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7dd95494-9j7cw,Uid:e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd\"" Jan 17 12:07:50.479922 containerd[1461]: time="2025-01-17T12:07:50.479788058Z" level=info msg="StopPodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\"" Jan 17 12:07:50.663124 kubelet[2497]: E0117 12:07:50.663086 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:50.788766 kubelet[2497]: I0117 12:07:50.788351 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bbv8n" podStartSLOduration=40.788331701 podStartE2EDuration="40.788331701s" podCreationTimestamp="2025-01-17 12:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:50.787984369 +0000 UTC m=+46.404276506" watchObservedRunningTime="2025-01-17 12:07:50.788331701 +0000 UTC m=+46.404623838" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.648 [INFO][4451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.648 [INFO][4451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" iface="eth0" netns="/var/run/netns/cni-7a99fb15-fabb-0479-97bc-9daea947b45f" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.649 [INFO][4451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" iface="eth0" netns="/var/run/netns/cni-7a99fb15-fabb-0479-97bc-9daea947b45f" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.649 [INFO][4451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" iface="eth0" netns="/var/run/netns/cni-7a99fb15-fabb-0479-97bc-9daea947b45f" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.649 [INFO][4451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.649 [INFO][4451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.700 [INFO][4458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.700 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.700 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.781 [WARNING][4458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.781 [INFO][4458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.784 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:07:50.792743 containerd[1461]: 2025-01-17 12:07:50.787 [INFO][4451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:07:50.792743 containerd[1461]: time="2025-01-17T12:07:50.791178634Z" level=info msg="TearDown network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" successfully" Jan 17 12:07:50.792743 containerd[1461]: time="2025-01-17T12:07:50.791214462Z" level=info msg="StopPodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" returns successfully" Jan 17 12:07:50.794023 containerd[1461]: time="2025-01-17T12:07:50.793965715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-bmklj,Uid:20ef2c4d-046f-4d6c-ad61-7a9796d385c0,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:07:50.802109 systemd[1]: run-netns-cni\x2d7a99fb15\x2dfabb\x2d0479\x2d97bc\x2d9daea947b45f.mount: Deactivated successfully. 
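The recurring kubelet event "Nameserver limits exceeded" (from the dns.go:153 call site above) means the node's resolv.conf lists more nameservers than can be passed through to pods; the classic resolver honours only the first three "nameserver" lines, so kubelet trims the list to the three shown (1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted, once per sandbox whose DNS config it assembles. A small sketch of that trimming is below; it is an illustration of the behaviour, not kubelet's implementation, and the fourth nameserver is a made-up example.

```go
package main

import "fmt"

// capNameservers keeps at most limit entries, mimicking the trim that produces
// the "Nameserver limits exceeded" event: the kept entries become the applied
// nameserver line, the rest are reported as omitted.
func capNameservers(servers []string, limit int) (kept, dropped []string) {
	if len(servers) <= limit {
		return servers, nil
	}
	return servers[:limit], servers[limit:]
}

func main() {
	hostResolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"} // last entry is hypothetical
	kept, dropped := capNameservers(hostResolvConf, 3)
	fmt.Println("applied:", kept, "omitted:", dropped)
}
```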
Jan 17 12:07:50.943701 systemd-networkd[1392]: calidcb313a6660: Gained IPv6LL Jan 17 12:07:50.961635 systemd-networkd[1392]: cali5950463bfad: Link UP Jan 17 12:07:50.962584 systemd-networkd[1392]: cali5950463bfad: Gained carrier Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.889 [INFO][4469] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0 calico-apiserver-9d5f4547c- calico-apiserver 20ef2c4d-046f-4d6c-ad61-7a9796d385c0 879 0 2025-01-17 12:07:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d5f4547c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d5f4547c-bmklj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5950463bfad [] []}} ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.889 [INFO][4469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.918 [INFO][4485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" HandleID="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.925 [INFO][4485] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" HandleID="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9d5f4547c-bmklj", "timestamp":"2025-01-17 12:07:50.918655214 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.926 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.926 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.926 [INFO][4485] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.927 [INFO][4485] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.931 [INFO][4485] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.935 [INFO][4485] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.936 [INFO][4485] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.938 [INFO][4485] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.938 [INFO][4485] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.939 [INFO][4485] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7 Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.948 [INFO][4485] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.956 [INFO][4485] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.956 [INFO][4485] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" host="localhost" Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.956 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:07:50.979307 containerd[1461]: 2025-01-17 12:07:50.956 [INFO][4485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" HandleID="k8s-pod-network.703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.979970 containerd[1461]: 2025-01-17 12:07:50.959 [INFO][4469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"20ef2c4d-046f-4d6c-ad61-7a9796d385c0", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d5f4547c-bmklj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950463bfad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:50.979970 containerd[1461]: 2025-01-17 12:07:50.959 [INFO][4469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.979970 containerd[1461]: 2025-01-17 12:07:50.959 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5950463bfad ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.979970 containerd[1461]: 2025-01-17 12:07:50.962 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:50.979970 containerd[1461]: 2025-01-17 12:07:50.962 [INFO][4469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" 
Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"20ef2c4d-046f-4d6c-ad61-7a9796d385c0", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7", Pod:"calico-apiserver-9d5f4547c-bmklj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950463bfad", MAC:"9a:6f:04:6a:d8:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:50.979970 containerd[1461]: 2025-01-17 12:07:50.974 [INFO][4469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7" Namespace="calico-apiserver" Pod="calico-apiserver-9d5f4547c-bmklj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:07:51.000803 containerd[1461]: time="2025-01-17T12:07:51.000676609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:51.000803 containerd[1461]: time="2025-01-17T12:07:51.000769685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:51.000985 containerd[1461]: time="2025-01-17T12:07:51.000792367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:51.000985 containerd[1461]: time="2025-01-17T12:07:51.000926228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:51.023774 systemd[1]: Started cri-containerd-703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7.scope - libcontainer container 703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7. 
Jan 17 12:07:51.038067 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:07:51.065668 containerd[1461]: time="2025-01-17T12:07:51.065592654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d5f4547c-bmklj,Uid:20ef2c4d-046f-4d6c-ad61-7a9796d385c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7\"" Jan 17 12:07:51.199785 systemd-networkd[1392]: cali777b6961282: Gained IPv6LL Jan 17 12:07:51.200291 systemd-networkd[1392]: cali189d70d9487: Gained IPv6LL Jan 17 12:07:51.317196 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:38476.service - OpenSSH per-connection server daemon (10.0.0.1:38476). Jan 17 12:07:51.362484 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 38476 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:51.364306 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:51.368729 systemd-logind[1446]: New session 11 of user core. Jan 17 12:07:51.376646 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:07:51.481481 containerd[1461]: time="2025-01-17T12:07:51.480872880Z" level=info msg="StopPodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\"" Jan 17 12:07:51.481481 containerd[1461]: time="2025-01-17T12:07:51.481107170Z" level=info msg="StopPodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\"" Jan 17 12:07:51.525564 sshd[4550]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:51.536788 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:38476.service: Deactivated successfully. Jan 17 12:07:51.539588 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:07:51.544256 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:07:51.553933 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:38490.service - OpenSSH per-connection server daemon (10.0.0.1:38490). Jan 17 12:07:51.556643 systemd-logind[1446]: Removed session 11. Jan 17 12:07:51.613940 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 38490 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:51.616153 sshd[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.571 [INFO][4588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.572 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" iface="eth0" netns="/var/run/netns/cni-4b717b33-b97a-a656-7893-c32f2e3dc5f5" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.574 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" iface="eth0" netns="/var/run/netns/cni-4b717b33-b97a-a656-7893-c32f2e3dc5f5" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.574 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" iface="eth0" netns="/var/run/netns/cni-4b717b33-b97a-a656-7893-c32f2e3dc5f5" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.574 [INFO][4588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.574 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.603 [INFO][4614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.603 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.603 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.608 [WARNING][4614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.608 [INFO][4614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.610 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:07:51.616745 containerd[1461]: 2025-01-17 12:07:51.612 [INFO][4588] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:07:51.621392 systemd[1]: run-netns-cni\x2d4b717b33\x2db97a\x2da656\x2d7893\x2dc32f2e3dc5f5.mount: Deactivated successfully. Jan 17 12:07:51.622834 containerd[1461]: time="2025-01-17T12:07:51.622655808Z" level=info msg="TearDown network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" successfully" Jan 17 12:07:51.622834 containerd[1461]: time="2025-01-17T12:07:51.622699940Z" level=info msg="StopPodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" returns successfully" Jan 17 12:07:51.623565 containerd[1461]: time="2025-01-17T12:07:51.623507416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjd54,Uid:72572da8-b025-4046-8056-05fcf0914c02,Namespace:calico-system,Attempt:1,}" Jan 17 12:07:51.627313 systemd-logind[1446]: New session 12 of user core. Jan 17 12:07:51.631717 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.589 [INFO][4598] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.591 [INFO][4598] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" iface="eth0" netns="/var/run/netns/cni-f738947d-c783-f415-9670-31448ab1ed40" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.593 [INFO][4598] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" iface="eth0" netns="/var/run/netns/cni-f738947d-c783-f415-9670-31448ab1ed40" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.594 [INFO][4598] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" iface="eth0" netns="/var/run/netns/cni-f738947d-c783-f415-9670-31448ab1ed40" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.594 [INFO][4598] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.594 [INFO][4598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.630 [INFO][4620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.630 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.630 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.636 [WARNING][4620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.636 [INFO][4620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.637 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:07:51.648514 containerd[1461]: 2025-01-17 12:07:51.642 [INFO][4598] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:07:51.648514 containerd[1461]: time="2025-01-17T12:07:51.647793637Z" level=info msg="TearDown network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" successfully" Jan 17 12:07:51.648514 containerd[1461]: time="2025-01-17T12:07:51.647821659Z" level=info msg="StopPodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" returns successfully" Jan 17 12:07:51.648962 kubelet[2497]: E0117 12:07:51.648318 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:51.650185 systemd[1]: run-netns-cni\x2df738947d\x2dc783\x2df415\x2d9670\x2d31448ab1ed40.mount: Deactivated successfully. Jan 17 12:07:51.651935 containerd[1461]: time="2025-01-17T12:07:51.650652162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d5h6c,Uid:6566e330-1ca2-4086-a201-75a74be50141,Namespace:kube-system,Attempt:1,}" Jan 17 12:07:51.670296 kubelet[2497]: E0117 12:07:51.670260 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:51.826434 systemd-networkd[1392]: cali5df4e1958ab: Link UP Jan 17 12:07:51.828356 systemd-networkd[1392]: cali5df4e1958ab: Gained carrier Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.711 [INFO][4635] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pjd54-eth0 csi-node-driver- calico-system 72572da8-b025-4046-8056-05fcf0914c02 899 0 2025-01-17 12:07:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pjd54 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5df4e1958ab [] []}} ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.712 [INFO][4635] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.764 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" HandleID="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.780 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" HandleID="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000362950), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pjd54", "timestamp":"2025-01-17 12:07:51.764288855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.780 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.780 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.780 [INFO][4661] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.784 [INFO][4661] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.795 [INFO][4661] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.800 [INFO][4661] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.801 [INFO][4661] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.804 [INFO][4661] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.804 [INFO][4661] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.806 [INFO][4661] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02 Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.809 [INFO][4661] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.816 [INFO][4661] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.816 [INFO][4661] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" host="localhost" Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.816 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:07:51.852890 containerd[1461]: 2025-01-17 12:07:51.816 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" HandleID="k8s-pod-network.7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.852830 sshd[4611]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:51.854084 containerd[1461]: 2025-01-17 12:07:51.821 [INFO][4635] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjd54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72572da8-b025-4046-8056-05fcf0914c02", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pjd54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5df4e1958ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:51.854084 containerd[1461]: 2025-01-17 12:07:51.822 [INFO][4635] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.854084 containerd[1461]: 2025-01-17 12:07:51.822 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5df4e1958ab ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.854084 containerd[1461]: 2025-01-17 12:07:51.828 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.854084 containerd[1461]: 2025-01-17 12:07:51.829 [INFO][4635] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" 
Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjd54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72572da8-b025-4046-8056-05fcf0914c02", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02", Pod:"csi-node-driver-pjd54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5df4e1958ab", MAC:"de:d0:40:7f:95:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:51.854084 containerd[1461]: 2025-01-17 12:07:51.843 [INFO][4635] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02" Namespace="calico-system" Pod="csi-node-driver-pjd54" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:07:51.868782 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:38490.service: Deactivated successfully. Jan 17 12:07:51.872382 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:07:51.874046 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:07:51.884282 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:38492.service - OpenSSH per-connection server daemon (10.0.0.1:38492). Jan 17 12:07:51.886397 systemd-logind[1446]: Removed session 12. Jan 17 12:07:51.955489 containerd[1461]: time="2025-01-17T12:07:51.949538561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:51.955489 containerd[1461]: time="2025-01-17T12:07:51.949624201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:51.955489 containerd[1461]: time="2025-01-17T12:07:51.949665749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:51.955489 containerd[1461]: time="2025-01-17T12:07:51.949771738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:51.973761 systemd[1]: Started cri-containerd-7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02.scope - libcontainer container 7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02. 
Jan 17 12:07:51.992293 systemd-networkd[1392]: calida00858d6dd: Link UP Jan 17 12:07:51.993809 systemd-networkd[1392]: calida00858d6dd: Gained carrier Jan 17 12:07:51.994667 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 38492 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:51.994805 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:07:51.996195 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:52.009870 systemd-logind[1446]: New session 13 of user core. Jan 17 12:07:52.015715 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.735 [INFO][4643] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0 coredns-6f6b679f8f- kube-system 6566e330-1ca2-4086-a201-75a74be50141 900 0 2025-01-17 12:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-d5h6c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida00858d6dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.735 [INFO][4643] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.778 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" HandleID="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.793 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" HandleID="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00055ec50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-d5h6c", "timestamp":"2025-01-17 12:07:51.778803436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.794 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.816 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.816 [INFO][4668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.885 [INFO][4668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.927 [INFO][4668] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.945 [INFO][4668] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.950 [INFO][4668] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.959 [INFO][4668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.959 [INFO][4668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.963 [INFO][4668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.971 [INFO][4668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.982 [INFO][4668] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.982 [INFO][4668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" host="localhost" Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.982 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
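
Both CNI ADDs above ([4661] for the csi-node-driver pod and [4668] for the coredns pod) funnel through the same host-wide IPAM lock: the second request only acquires it at 12:07:51.816, the instant the first releases it. A toy Go sketch of that serialization pattern, a plain mutex around the block update; purely illustrative, with pod names taken from the log and an assumed starting address.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		hostLock sync.Mutex
		wg       sync.WaitGroup
		assigned []string
		next     = 133 // assumed next free host index in 192.168.88.128/26
	)
	for _, pod := range []string{"csi-node-driver-pjd54", "coredns-6f6b679f8f-d5h6c"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(pod, "about to acquire host-wide IPAM lock")
			hostLock.Lock()
			fmt.Println(pod, "acquired host-wide IPAM lock")
			assigned = append(assigned, fmt.Sprintf("%s -> 192.168.88.%d", pod, next))
			next++
			hostLock.Unlock()
			fmt.Println(pod, "released host-wide IPAM lock")
		}(pod)
	}
	wg.Wait()
	fmt.Println(assigned)
}
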
Jan 17 12:07:52.017561 containerd[1461]: 2025-01-17 12:07:51.983 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" HandleID="k8s-pod-network.24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.018303 containerd[1461]: 2025-01-17 12:07:51.988 [INFO][4643] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6566e330-1ca2-4086-a201-75a74be50141", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-d5h6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida00858d6dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:52.018303 containerd[1461]: 2025-01-17 12:07:51.988 [INFO][4643] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.018303 containerd[1461]: 2025-01-17 12:07:51.988 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida00858d6dd ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.018303 containerd[1461]: 2025-01-17 12:07:51.992 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.018303 containerd[1461]: 2025-01-17 12:07:51.995 
[INFO][4643] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6566e330-1ca2-4086-a201-75a74be50141", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e", Pod:"coredns-6f6b679f8f-d5h6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida00858d6dd", MAC:"f6:fc:95:4a:1d:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:07:52.018303 containerd[1461]: 2025-01-17 12:07:52.007 [INFO][4643] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-d5h6c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:07:52.024397 containerd[1461]: time="2025-01-17T12:07:52.022254862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjd54,Uid:72572da8-b025-4046-8056-05fcf0914c02,Namespace:calico-system,Attempt:1,} returns sandbox id \"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02\"" Jan 17 12:07:52.053662 containerd[1461]: time="2025-01-17T12:07:52.053495442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:07:52.053662 containerd[1461]: time="2025-01-17T12:07:52.053609375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:07:52.053662 containerd[1461]: time="2025-01-17T12:07:52.053629303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:52.054023 containerd[1461]: time="2025-01-17T12:07:52.053754958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:07:52.095217 systemd[1]: Started cri-containerd-24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e.scope - libcontainer container 24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e. Jan 17 12:07:52.114482 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:07:52.151498 containerd[1461]: time="2025-01-17T12:07:52.149797864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d5h6c,Uid:6566e330-1ca2-4086-a201-75a74be50141,Namespace:kube-system,Attempt:1,} returns sandbox id \"24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e\"" Jan 17 12:07:52.153368 kubelet[2497]: E0117 12:07:52.153324 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:52.155783 containerd[1461]: time="2025-01-17T12:07:52.155727943Z" level=info msg="CreateContainer within sandbox \"24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:07:52.159882 systemd-networkd[1392]: cali5950463bfad: Gained IPv6LL Jan 17 12:07:52.184248 sshd[4698]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:52.189309 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:38492.service: Deactivated successfully. Jan 17 12:07:52.191433 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:07:52.192201 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:07:52.193227 systemd-logind[1446]: Removed session 13. Jan 17 12:07:52.256057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571435012.mount: Deactivated successfully. Jan 17 12:07:52.262558 containerd[1461]: time="2025-01-17T12:07:52.262499331Z" level=info msg="CreateContainer within sandbox \"24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"754ac00e9a03e40869d39b345cc7c9cf891e8eba6ef636b9279a5a39cefeba76\"" Jan 17 12:07:52.263155 containerd[1461]: time="2025-01-17T12:07:52.263114875Z" level=info msg="StartContainer for \"754ac00e9a03e40869d39b345cc7c9cf891e8eba6ef636b9279a5a39cefeba76\"" Jan 17 12:07:52.304802 systemd[1]: Started cri-containerd-754ac00e9a03e40869d39b345cc7c9cf891e8eba6ef636b9279a5a39cefeba76.scope - libcontainer container 754ac00e9a03e40869d39b345cc7c9cf891e8eba6ef636b9279a5a39cefeba76. 
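
The recurring kubelet "Nameserver limits exceeded" warnings above come from a resolv.conf listing more nameservers than the resolver will apply; the applied line in the log keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8). A minimal Go sketch of that truncation check, with the limit of three and the file contents assumed for illustration.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumed limit, matching the three-server applied line above

func main() {
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
`
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) == 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
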
Jan 17 12:07:52.366937 containerd[1461]: time="2025-01-17T12:07:52.366781598Z" level=info msg="StartContainer for \"754ac00e9a03e40869d39b345cc7c9cf891e8eba6ef636b9279a5a39cefeba76\" returns successfully" Jan 17 12:07:52.675879 kubelet[2497]: E0117 12:07:52.675777 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:52.775934 kubelet[2497]: I0117 12:07:52.775855 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d5h6c" podStartSLOduration=42.77583448 podStartE2EDuration="42.77583448s" podCreationTimestamp="2025-01-17 12:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:52.775317309 +0000 UTC m=+48.391609456" watchObservedRunningTime="2025-01-17 12:07:52.77583448 +0000 UTC m=+48.392126617" Jan 17 12:07:53.079451 containerd[1461]: time="2025-01-17T12:07:53.079355723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:53.080443 containerd[1461]: time="2025-01-17T12:07:53.080327667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:07:53.082030 containerd[1461]: time="2025-01-17T12:07:53.081967705Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:53.084353 containerd[1461]: time="2025-01-17T12:07:53.084313366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:53.085109 containerd[1461]: time="2025-01-17T12:07:53.085057253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.491654715s" Jan 17 12:07:53.085109 containerd[1461]: time="2025-01-17T12:07:53.085100203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:07:53.086611 containerd[1461]: time="2025-01-17T12:07:53.086572967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:07:53.088015 containerd[1461]: time="2025-01-17T12:07:53.087978344Z" level=info msg="CreateContainer within sandbox \"a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:07:53.103188 containerd[1461]: time="2025-01-17T12:07:53.103138062Z" level=info msg="CreateContainer within sandbox \"a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2bbb7218a59564fad0de234568f14315552f4726ec970af16653f4188023182\"" Jan 17 12:07:53.103829 containerd[1461]: time="2025-01-17T12:07:53.103788153Z" level=info msg="StartContainer for 
\"f2bbb7218a59564fad0de234568f14315552f4726ec970af16653f4188023182\"" Jan 17 12:07:53.163699 systemd[1]: Started cri-containerd-f2bbb7218a59564fad0de234568f14315552f4726ec970af16653f4188023182.scope - libcontainer container f2bbb7218a59564fad0de234568f14315552f4726ec970af16653f4188023182. Jan 17 12:07:53.212982 containerd[1461]: time="2025-01-17T12:07:53.212916827Z" level=info msg="StartContainer for \"f2bbb7218a59564fad0de234568f14315552f4726ec970af16653f4188023182\" returns successfully" Jan 17 12:07:53.440699 systemd-networkd[1392]: cali5df4e1958ab: Gained IPv6LL Jan 17 12:07:53.679094 kubelet[2497]: E0117 12:07:53.679046 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:53.887820 systemd-networkd[1392]: calida00858d6dd: Gained IPv6LL Jan 17 12:07:54.680261 kubelet[2497]: I0117 12:07:54.680226 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:07:54.680766 kubelet[2497]: E0117 12:07:54.680585 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:07:55.411632 kubelet[2497]: I0117 12:07:55.410870 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9d5f4547c-shv4q" podStartSLOduration=34.917924282 podStartE2EDuration="38.410855463s" podCreationTimestamp="2025-01-17 12:07:17 +0000 UTC" firstStartedPulling="2025-01-17 12:07:49.593062068 +0000 UTC m=+45.209354205" lastFinishedPulling="2025-01-17 12:07:53.085993249 +0000 UTC m=+48.702285386" observedRunningTime="2025-01-17 12:07:53.691632354 +0000 UTC m=+49.307924491" watchObservedRunningTime="2025-01-17 12:07:55.410855463 +0000 UTC m=+51.027147600" Jan 17 12:07:56.932034 containerd[1461]: time="2025-01-17T12:07:56.931947399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:56.960367 containerd[1461]: time="2025-01-17T12:07:56.960285854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:07:56.990286 containerd[1461]: time="2025-01-17T12:07:56.990205094Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:57.020888 containerd[1461]: time="2025-01-17T12:07:57.020817925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:57.021819 containerd[1461]: time="2025-01-17T12:07:57.021767557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.935149005s" Jan 17 12:07:57.022384 containerd[1461]: time="2025-01-17T12:07:57.022351713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:07:57.023814 containerd[1461]: time="2025-01-17T12:07:57.023757711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:07:57.032988 containerd[1461]: time="2025-01-17T12:07:57.032697314Z" level=info msg="CreateContainer within sandbox \"a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:07:57.172517 containerd[1461]: time="2025-01-17T12:07:57.172463763Z" level=info msg="CreateContainer within sandbox \"a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2e1b1c91532f25456493e69e20409ac412cfec0e13d8b07d4237f4d82968b945\"" Jan 17 12:07:57.173082 containerd[1461]: time="2025-01-17T12:07:57.173061595Z" level=info msg="StartContainer for \"2e1b1c91532f25456493e69e20409ac412cfec0e13d8b07d4237f4d82968b945\"" Jan 17 12:07:57.203483 systemd[1]: Started cri-containerd-2e1b1c91532f25456493e69e20409ac412cfec0e13d8b07d4237f4d82968b945.scope - libcontainer container 2e1b1c91532f25456493e69e20409ac412cfec0e13d8b07d4237f4d82968b945. Jan 17 12:07:57.205183 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:38498.service - OpenSSH per-connection server daemon (10.0.0.1:38498). Jan 17 12:07:57.251781 sshd[4936]: Accepted publickey for core from 10.0.0.1 port 38498 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:07:57.254199 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:57.257321 containerd[1461]: time="2025-01-17T12:07:57.257161534Z" level=info msg="StartContainer for \"2e1b1c91532f25456493e69e20409ac412cfec0e13d8b07d4237f4d82968b945\" returns successfully" Jan 17 12:07:57.259971 systemd-logind[1446]: New session 14 of user core. Jan 17 12:07:57.267105 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:07:57.404897 sshd[4936]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:57.410402 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:38498.service: Deactivated successfully. Jan 17 12:07:57.414001 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:07:57.415034 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:07:57.416199 systemd-logind[1446]: Removed session 14. 
Jan 17 12:07:57.524276 containerd[1461]: time="2025-01-17T12:07:57.524110070Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:57.525427 containerd[1461]: time="2025-01-17T12:07:57.525338003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:07:57.528078 containerd[1461]: time="2025-01-17T12:07:57.528047598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 504.123474ms" Jan 17 12:07:57.528148 containerd[1461]: time="2025-01-17T12:07:57.528082724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:07:57.529363 containerd[1461]: time="2025-01-17T12:07:57.529334973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:07:57.530467 containerd[1461]: time="2025-01-17T12:07:57.530433574Z" level=info msg="CreateContainer within sandbox \"703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:07:57.548182 containerd[1461]: time="2025-01-17T12:07:57.548021054Z" level=info msg="CreateContainer within sandbox \"703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"66241f6c73e202c94f5c95b2836acc704b379cb337b92008b3a2a989b4dc7cd7\"" Jan 17 12:07:57.548922 containerd[1461]: time="2025-01-17T12:07:57.548887570Z" level=info msg="StartContainer for \"66241f6c73e202c94f5c95b2836acc704b379cb337b92008b3a2a989b4dc7cd7\"" Jan 17 12:07:57.582866 systemd[1]: Started cri-containerd-66241f6c73e202c94f5c95b2836acc704b379cb337b92008b3a2a989b4dc7cd7.scope - libcontainer container 66241f6c73e202c94f5c95b2836acc704b379cb337b92008b3a2a989b4dc7cd7. 
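
The same apiserver image is pulled twice in this log: the first pull took 3.491654715s, while the second one just above completes in 504.123474ms and reads only 77 bytes (an ImageUpdate against content that is apparently already present). A tiny Go sketch comparing the two durations exactly as they appear in the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Duration strings copied from the containerd "Pulled image" entries above.
	firstPull, _ := time.ParseDuration("3.491654715s")
	cachedPull, _ := time.ParseDuration("504.123474ms")
	fmt.Printf("first pull:  %v\n", firstPull)
	fmt.Printf("second pull: %v (%.1fx faster)\n",
		cachedPull, firstPull.Seconds()/cachedPull.Seconds())
}
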
Jan 17 12:07:57.631812 containerd[1461]: time="2025-01-17T12:07:57.631739498Z" level=info msg="StartContainer for \"66241f6c73e202c94f5c95b2836acc704b379cb337b92008b3a2a989b4dc7cd7\" returns successfully" Jan 17 12:07:57.703602 kubelet[2497]: I0117 12:07:57.702912 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d7dd95494-9j7cw" podStartSLOduration=33.829184327 podStartE2EDuration="40.702892604s" podCreationTimestamp="2025-01-17 12:07:17 +0000 UTC" firstStartedPulling="2025-01-17 12:07:50.149622473 +0000 UTC m=+45.765914610" lastFinishedPulling="2025-01-17 12:07:57.02333075 +0000 UTC m=+52.639622887" observedRunningTime="2025-01-17 12:07:57.702656552 +0000 UTC m=+53.318948689" watchObservedRunningTime="2025-01-17 12:07:57.702892604 +0000 UTC m=+53.319184741" Jan 17 12:07:57.725639 kubelet[2497]: I0117 12:07:57.724801 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9d5f4547c-bmklj" podStartSLOduration=34.262643238 podStartE2EDuration="40.72478386s" podCreationTimestamp="2025-01-17 12:07:17 +0000 UTC" firstStartedPulling="2025-01-17 12:07:51.066812033 +0000 UTC m=+46.683104170" lastFinishedPulling="2025-01-17 12:07:57.528952655 +0000 UTC m=+53.145244792" observedRunningTime="2025-01-17 12:07:57.723183237 +0000 UTC m=+53.339475374" watchObservedRunningTime="2025-01-17 12:07:57.72478386 +0000 UTC m=+53.341075997" Jan 17 12:07:58.712391 kubelet[2497]: I0117 12:07:58.712334 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:07:59.885079 containerd[1461]: time="2025-01-17T12:07:59.884997423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:59.885865 containerd[1461]: time="2025-01-17T12:07:59.885820037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:07:59.886929 containerd[1461]: time="2025-01-17T12:07:59.886886408Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:59.889157 containerd[1461]: time="2025-01-17T12:07:59.889114007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:07:59.889943 containerd[1461]: time="2025-01-17T12:07:59.889894731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.360439853s" Jan 17 12:07:59.889943 containerd[1461]: time="2025-01-17T12:07:59.889942291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:07:59.892192 containerd[1461]: time="2025-01-17T12:07:59.892159591Z" level=info msg="CreateContainer within sandbox \"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:07:59.950199 containerd[1461]: 
time="2025-01-17T12:07:59.950119606Z" level=info msg="CreateContainer within sandbox \"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d7fad99de3146b305170e41cd305e6582429f472f378a8f856fd94995d4af28e\"" Jan 17 12:07:59.950778 containerd[1461]: time="2025-01-17T12:07:59.950739310Z" level=info msg="StartContainer for \"d7fad99de3146b305170e41cd305e6582429f472f378a8f856fd94995d4af28e\"" Jan 17 12:07:59.990760 systemd[1]: Started cri-containerd-d7fad99de3146b305170e41cd305e6582429f472f378a8f856fd94995d4af28e.scope - libcontainer container d7fad99de3146b305170e41cd305e6582429f472f378a8f856fd94995d4af28e. Jan 17 12:08:00.029063 containerd[1461]: time="2025-01-17T12:08:00.028682376Z" level=info msg="StartContainer for \"d7fad99de3146b305170e41cd305e6582429f472f378a8f856fd94995d4af28e\" returns successfully" Jan 17 12:08:00.030718 containerd[1461]: time="2025-01-17T12:08:00.030657161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:08:01.689409 kubelet[2497]: I0117 12:08:01.689349 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:08:02.428838 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:60366.service - OpenSSH per-connection server daemon (10.0.0.1:60366). Jan 17 12:08:02.861270 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 60366 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:02.863762 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:02.868781 systemd-logind[1446]: New session 15 of user core. Jan 17 12:08:02.877688 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:08:03.057861 sshd[5087]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:03.062044 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:60366.service: Deactivated successfully. Jan 17 12:08:03.064410 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:08:03.065105 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:08:03.065999 systemd-logind[1446]: Removed session 15. 
Jan 17 12:08:03.114578 containerd[1461]: time="2025-01-17T12:08:03.114394655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:03.115958 containerd[1461]: time="2025-01-17T12:08:03.115873206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:08:03.117611 containerd[1461]: time="2025-01-17T12:08:03.117571992Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:03.120151 containerd[1461]: time="2025-01-17T12:08:03.120098751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:03.120794 containerd[1461]: time="2025-01-17T12:08:03.120764609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.090061773s" Jan 17 12:08:03.120846 containerd[1461]: time="2025-01-17T12:08:03.120798845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:08:03.122936 containerd[1461]: time="2025-01-17T12:08:03.122905712Z" level=info msg="CreateContainer within sandbox \"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:08:03.137810 containerd[1461]: time="2025-01-17T12:08:03.137753790Z" level=info msg="CreateContainer within sandbox \"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8cc2ac8c5ef114e42e3f4547269b20a2bdb8b0309c97d8b679ff1be9eea3ef49\"" Jan 17 12:08:03.139666 containerd[1461]: time="2025-01-17T12:08:03.138775236Z" level=info msg="StartContainer for \"8cc2ac8c5ef114e42e3f4547269b20a2bdb8b0309c97d8b679ff1be9eea3ef49\"" Jan 17 12:08:03.174672 systemd[1]: Started cri-containerd-8cc2ac8c5ef114e42e3f4547269b20a2bdb8b0309c97d8b679ff1be9eea3ef49.scope - libcontainer container 8cc2ac8c5ef114e42e3f4547269b20a2bdb8b0309c97d8b679ff1be9eea3ef49. 
Jan 17 12:08:03.205054 containerd[1461]: time="2025-01-17T12:08:03.205003665Z" level=info msg="StartContainer for \"8cc2ac8c5ef114e42e3f4547269b20a2bdb8b0309c97d8b679ff1be9eea3ef49\" returns successfully" Jan 17 12:08:03.582847 kubelet[2497]: I0117 12:08:03.582796 2497 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:08:03.582847 kubelet[2497]: I0117 12:08:03.582848 2497 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:08:04.460630 containerd[1461]: time="2025-01-17T12:08:04.460572058Z" level=info msg="StopPodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\"" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.499 [WARNING][5155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3401b0dc-0111-43e3-9a45-29e2c23e6b1c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891", Pod:"coredns-6f6b679f8f-bbv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb313a6660", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.500 [INFO][5155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.500 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" iface="eth0" netns="" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.500 [INFO][5155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.500 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.527 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.527 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.527 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.533 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.533 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.535 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:04.542790 containerd[1461]: 2025-01-17 12:08:04.538 [INFO][5155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.543240 containerd[1461]: time="2025-01-17T12:08:04.542832691Z" level=info msg="TearDown network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" successfully" Jan 17 12:08:04.543240 containerd[1461]: time="2025-01-17T12:08:04.542868811Z" level=info msg="StopPodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" returns successfully" Jan 17 12:08:04.552672 containerd[1461]: time="2025-01-17T12:08:04.552610243Z" level=info msg="RemovePodSandbox for \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\"" Jan 17 12:08:04.555555 containerd[1461]: time="2025-01-17T12:08:04.555499498Z" level=info msg="Forcibly stopping sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\"" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.605 [WARNING][5188] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3401b0dc-0111-43e3-9a45-29e2c23e6b1c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0be08e84483bff256ac1482d0cc9c0fdb9b045843190e4ce95ee3d47624da891", Pod:"coredns-6f6b679f8f-bbv8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb313a6660", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.605 [INFO][5188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.605 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" iface="eth0" netns="" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.605 [INFO][5188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.605 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.627 [INFO][5195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.627 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.627 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.634 [WARNING][5195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.634 [INFO][5195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" HandleID="k8s-pod-network.2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Workload="localhost-k8s-coredns--6f6b679f8f--bbv8n-eth0" Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.636 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:04.642306 containerd[1461]: 2025-01-17 12:08:04.639 [INFO][5188] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e" Jan 17 12:08:04.642823 containerd[1461]: time="2025-01-17T12:08:04.642325526Z" level=info msg="TearDown network for sandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" successfully" Jan 17 12:08:04.669228 containerd[1461]: time="2025-01-17T12:08:04.669122429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:04.669437 containerd[1461]: time="2025-01-17T12:08:04.669262149Z" level=info msg="RemovePodSandbox \"2917bec8ecfe89521c9b6503e34c406a8f3c6e9e0ac11a63185f0a0fc1838b3e\" returns successfully" Jan 17 12:08:04.670207 containerd[1461]: time="2025-01-17T12:08:04.670150476Z" level=info msg="StopPodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\"" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.717 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0", GenerateName:"calico-kube-controllers-5d7dd95494-", Namespace:"calico-system", SelfLink:"", UID:"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7dd95494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd", Pod:"calico-kube-controllers-5d7dd95494-9j7cw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali777b6961282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.718 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.718 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" iface="eth0" netns="" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.718 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.718 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.747 [INFO][5225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.748 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.748 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.754 [WARNING][5225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.754 [INFO][5225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.756 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:04.765457 containerd[1461]: 2025-01-17 12:08:04.760 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.765457 containerd[1461]: time="2025-01-17T12:08:04.764396899Z" level=info msg="TearDown network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" successfully" Jan 17 12:08:04.765457 containerd[1461]: time="2025-01-17T12:08:04.764425234Z" level=info msg="StopPodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" returns successfully" Jan 17 12:08:04.765457 containerd[1461]: time="2025-01-17T12:08:04.765055532Z" level=info msg="RemovePodSandbox for \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\"" Jan 17 12:08:04.765457 containerd[1461]: time="2025-01-17T12:08:04.765102293Z" level=info msg="Forcibly stopping sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\"" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.802 [WARNING][5250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0", GenerateName:"calico-kube-controllers-5d7dd95494-", Namespace:"calico-system", SelfLink:"", UID:"e3d4f78c-7b70-42ea-b36d-e9b2418ba7c2", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7dd95494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2fb133c13c30ec6c9ae244c2b04bbfcc86cf1fc0e0ad56a390f0f7d8cb4ebbd", Pod:"calico-kube-controllers-5d7dd95494-9j7cw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali777b6961282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.802 [INFO][5250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.802 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" iface="eth0" netns="" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.803 [INFO][5250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.803 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.825 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.826 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.826 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.831 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.831 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" HandleID="k8s-pod-network.7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Workload="localhost-k8s-calico--kube--controllers--5d7dd95494--9j7cw-eth0" Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.833 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:04.838747 containerd[1461]: 2025-01-17 12:08:04.835 [INFO][5250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9" Jan 17 12:08:04.839259 containerd[1461]: time="2025-01-17T12:08:04.838790734Z" level=info msg="TearDown network for sandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" successfully" Jan 17 12:08:04.843208 containerd[1461]: time="2025-01-17T12:08:04.843154029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:04.843208 containerd[1461]: time="2025-01-17T12:08:04.843216519Z" level=info msg="RemovePodSandbox \"7284ce941a700e854e32b78bc12dadc006d3c6d9ed3cde026be1367dcb9084e9\" returns successfully" Jan 17 12:08:04.843917 containerd[1461]: time="2025-01-17T12:08:04.843862218Z" level=info msg="StopPodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\"" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.883 [WARNING][5279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjd54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72572da8-b025-4046-8056-05fcf0914c02", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02", Pod:"csi-node-driver-pjd54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5df4e1958ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.884 [INFO][5279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.884 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" iface="eth0" netns="" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.884 [INFO][5279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.884 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.909 [INFO][5286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.909 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.909 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.917 [WARNING][5286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.917 [INFO][5286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.919 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:04.927063 containerd[1461]: 2025-01-17 12:08:04.923 [INFO][5279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:04.927813 containerd[1461]: time="2025-01-17T12:08:04.927120990Z" level=info msg="TearDown network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" successfully" Jan 17 12:08:04.927813 containerd[1461]: time="2025-01-17T12:08:04.927161839Z" level=info msg="StopPodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" returns successfully" Jan 17 12:08:04.927936 containerd[1461]: time="2025-01-17T12:08:04.927889827Z" level=info msg="RemovePodSandbox for \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\"" Jan 17 12:08:04.928001 containerd[1461]: time="2025-01-17T12:08:04.927947859Z" level=info msg="Forcibly stopping sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\"" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.970 [WARNING][5309] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjd54-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72572da8-b025-4046-8056-05fcf0914c02", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7852500325cd65f19b48e159e61b62cfba9cf4d63f65acbb7980af820b798e02", Pod:"csi-node-driver-pjd54", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5df4e1958ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.971 [INFO][5309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.971 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" iface="eth0" netns="" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.971 [INFO][5309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.971 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.995 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.995 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:04.995 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:05.001 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:05.001 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" HandleID="k8s-pod-network.8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Workload="localhost-k8s-csi--node--driver--pjd54-eth0" Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:05.003 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.009070 containerd[1461]: 2025-01-17 12:08:05.006 [INFO][5309] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346" Jan 17 12:08:05.009618 containerd[1461]: time="2025-01-17T12:08:05.009149372Z" level=info msg="TearDown network for sandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" successfully" Jan 17 12:08:05.014245 containerd[1461]: time="2025-01-17T12:08:05.014207161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:05.014300 containerd[1461]: time="2025-01-17T12:08:05.014278249Z" level=info msg="RemovePodSandbox \"8d8d488da15693c39c60f1fde841cbc08c40cd26192b9630f80d44361558a346\" returns successfully" Jan 17 12:08:05.014914 containerd[1461]: time="2025-01-17T12:08:05.014888307Z" level=info msg="StopPodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\"" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.058 [WARNING][5340] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf03d78-f4b6-4259-9d2e-ac6c275e4392", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8", Pod:"calico-apiserver-9d5f4547c-shv4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali189d70d9487", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.058 [INFO][5340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.058 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" iface="eth0" netns="" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.058 [INFO][5340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.058 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.086 [INFO][5347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.086 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.086 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.093 [WARNING][5347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.093 [INFO][5347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.096 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.102679 containerd[1461]: 2025-01-17 12:08:05.099 [INFO][5340] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.103222 containerd[1461]: time="2025-01-17T12:08:05.102749096Z" level=info msg="TearDown network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" successfully" Jan 17 12:08:05.103222 containerd[1461]: time="2025-01-17T12:08:05.102784484Z" level=info msg="StopPodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" returns successfully" Jan 17 12:08:05.103391 containerd[1461]: time="2025-01-17T12:08:05.103361289Z" level=info msg="RemovePodSandbox for \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\"" Jan 17 12:08:05.103391 containerd[1461]: time="2025-01-17T12:08:05.103396026Z" level=info msg="Forcibly stopping sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\"" Jan 17 12:08:05.153948 kubelet[2497]: E0117 12:08:05.153896 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:08:05.169979 kubelet[2497]: I0117 12:08:05.169895 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pjd54" podStartSLOduration=37.079135408 podStartE2EDuration="48.169871828s" podCreationTimestamp="2025-01-17 12:07:17 +0000 UTC" firstStartedPulling="2025-01-17 12:07:52.030924784 +0000 UTC m=+47.647216921" lastFinishedPulling="2025-01-17 12:08:03.121661204 +0000 UTC m=+58.737953341" observedRunningTime="2025-01-17 12:08:03.739639245 +0000 UTC m=+59.355931382" watchObservedRunningTime="2025-01-17 12:08:05.169871828 +0000 UTC m=+60.786163965" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.149 [WARNING][5391] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf03d78-f4b6-4259-9d2e-ac6c275e4392", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2019654f05de282e591be4a09b2ef717d408736167afb149ba0c5df77350cd8", Pod:"calico-apiserver-9d5f4547c-shv4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali189d70d9487", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.149 [INFO][5391] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.149 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" iface="eth0" netns="" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.149 [INFO][5391] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.149 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.183 [INFO][5399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.183 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.183 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.188 [WARNING][5399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.188 [INFO][5399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" HandleID="k8s-pod-network.66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Workload="localhost-k8s-calico--apiserver--9d5f4547c--shv4q-eth0" Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.189 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.195223 containerd[1461]: 2025-01-17 12:08:05.192 [INFO][5391] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd" Jan 17 12:08:05.195713 containerd[1461]: time="2025-01-17T12:08:05.195287200Z" level=info msg="TearDown network for sandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" successfully" Jan 17 12:08:05.199760 containerd[1461]: time="2025-01-17T12:08:05.199720402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:05.199804 containerd[1461]: time="2025-01-17T12:08:05.199791650Z" level=info msg="RemovePodSandbox \"66664b6002da3c6575d3574c9ddb41fb10da528d31f8baed60e602b733b884dd\" returns successfully" Jan 17 12:08:05.200361 containerd[1461]: time="2025-01-17T12:08:05.200318177Z" level=info msg="StopPodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\"" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.244 [WARNING][5422] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6566e330-1ca2-4086-a201-75a74be50141", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e", Pod:"coredns-6f6b679f8f-d5h6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida00858d6dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.245 [INFO][5422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.245 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" iface="eth0" netns="" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.245 [INFO][5422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.245 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.267 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.267 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.267 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.273 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.273 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.275 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.281734 containerd[1461]: 2025-01-17 12:08:05.278 [INFO][5422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.282258 containerd[1461]: time="2025-01-17T12:08:05.281778223Z" level=info msg="TearDown network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" successfully" Jan 17 12:08:05.282258 containerd[1461]: time="2025-01-17T12:08:05.281809653Z" level=info msg="StopPodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" returns successfully" Jan 17 12:08:05.282450 containerd[1461]: time="2025-01-17T12:08:05.282412668Z" level=info msg="RemovePodSandbox for \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\"" Jan 17 12:08:05.282450 containerd[1461]: time="2025-01-17T12:08:05.282445301Z" level=info msg="Forcibly stopping sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\"" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.319 [WARNING][5452] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6566e330-1ca2-4086-a201-75a74be50141", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24b3dc1300077faf4d284885505776796be6de26344a9bd0e3fbbb5287e9ae8e", Pod:"coredns-6f6b679f8f-d5h6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida00858d6dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.319 [INFO][5452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.319 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" iface="eth0" netns="" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.319 [INFO][5452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.319 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.340 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.340 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.340 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.371 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.371 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" HandleID="k8s-pod-network.98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Workload="localhost-k8s-coredns--6f6b679f8f--d5h6c-eth0" Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.373 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.378011 containerd[1461]: 2025-01-17 12:08:05.375 [INFO][5452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0" Jan 17 12:08:05.378011 containerd[1461]: time="2025-01-17T12:08:05.377991666Z" level=info msg="TearDown network for sandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" successfully" Jan 17 12:08:05.382966 containerd[1461]: time="2025-01-17T12:08:05.382935906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:05.383065 containerd[1461]: time="2025-01-17T12:08:05.382990071Z" level=info msg="RemovePodSandbox \"98fcbaeaf0d6cf0d0a0f52f3d4edb6102b40e9efb37e3ed38ec5a5aba4d296e0\" returns successfully" Jan 17 12:08:05.383669 containerd[1461]: time="2025-01-17T12:08:05.383630608Z" level=info msg="StopPodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\"" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.430 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"20ef2c4d-046f-4d6c-ad61-7a9796d385c0", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7", Pod:"calico-apiserver-9d5f4547c-bmklj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950463bfad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.430 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.430 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" iface="eth0" netns="" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.430 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.430 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.456 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.456 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.456 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.464 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.464 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.466 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.473127 containerd[1461]: 2025-01-17 12:08:05.469 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.474404 containerd[1461]: time="2025-01-17T12:08:05.474106188Z" level=info msg="TearDown network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" successfully" Jan 17 12:08:05.474404 containerd[1461]: time="2025-01-17T12:08:05.474146365Z" level=info msg="StopPodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" returns successfully" Jan 17 12:08:05.474881 containerd[1461]: time="2025-01-17T12:08:05.474837570Z" level=info msg="RemovePodSandbox for \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\"" Jan 17 12:08:05.474952 containerd[1461]: time="2025-01-17T12:08:05.474889891Z" level=info msg="Forcibly stopping sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\"" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.519 [WARNING][5518] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0", GenerateName:"calico-apiserver-9d5f4547c-", Namespace:"calico-apiserver", SelfLink:"", UID:"20ef2c4d-046f-4d6c-ad61-7a9796d385c0", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d5f4547c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"703958b0a784e586349bcfd66676bb8ed43dc1f04156861456b47a1032014de7", Pod:"calico-apiserver-9d5f4547c-bmklj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950463bfad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.520 [INFO][5518] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.520 [INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" iface="eth0" netns="" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.520 [INFO][5518] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.520 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.545 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.545 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.545 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.552 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.552 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" HandleID="k8s-pod-network.a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Workload="localhost-k8s-calico--apiserver--9d5f4547c--bmklj-eth0" Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.554 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:05.561433 containerd[1461]: 2025-01-17 12:08:05.558 [INFO][5518] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d" Jan 17 12:08:05.562071 containerd[1461]: time="2025-01-17T12:08:05.561484615Z" level=info msg="TearDown network for sandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" successfully" Jan 17 12:08:05.567170 containerd[1461]: time="2025-01-17T12:08:05.567115862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:05.567249 containerd[1461]: time="2025-01-17T12:08:05.567184806Z" level=info msg="RemovePodSandbox \"a74a8bd4355bfff95fe876c28ff34cdb4f6dc6b4177b5ca5db767c93a7a9292d\" returns successfully" Jan 17 12:08:08.069893 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:41770.service - OpenSSH per-connection server daemon (10.0.0.1:41770). Jan 17 12:08:08.118171 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 41770 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:08.120036 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:08.124247 systemd-logind[1446]: New session 16 of user core. Jan 17 12:08:08.132732 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:08:08.262982 sshd[5545]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:08.267984 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:41770.service: Deactivated successfully. Jan 17 12:08:08.270459 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:08:08.271168 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:08:08.272227 systemd-logind[1446]: Removed session 16. Jan 17 12:08:13.275777 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:41784.service - OpenSSH per-connection server daemon (10.0.0.1:41784). Jan 17 12:08:13.319253 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 41784 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:13.321181 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:13.326094 systemd-logind[1446]: New session 17 of user core. Jan 17 12:08:13.335783 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:08:13.453949 sshd[5569]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:13.457930 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:41784.service: Deactivated successfully. Jan 17 12:08:13.460096 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 17 12:08:13.460837 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:08:13.461846 systemd-logind[1446]: Removed session 17. Jan 17 12:08:18.465898 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:50144.service - OpenSSH per-connection server daemon (10.0.0.1:50144). Jan 17 12:08:18.508689 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 50144 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:18.510382 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:18.515264 systemd-logind[1446]: New session 18 of user core. Jan 17 12:08:18.523671 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:08:18.644407 sshd[5584]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:18.652572 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:50144.service: Deactivated successfully. Jan 17 12:08:18.654711 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:08:18.656316 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:08:18.661202 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:50150.service - OpenSSH per-connection server daemon (10.0.0.1:50150). Jan 17 12:08:18.662162 systemd-logind[1446]: Removed session 18. Jan 17 12:08:18.700428 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 50150 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:18.702342 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:18.707486 systemd-logind[1446]: New session 19 of user core. Jan 17 12:08:18.714681 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:08:19.178160 sshd[5598]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:19.187774 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:50150.service: Deactivated successfully. Jan 17 12:08:19.189900 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:08:19.191549 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:08:19.197302 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:50158.service - OpenSSH per-connection server daemon (10.0.0.1:50158). Jan 17 12:08:19.198552 systemd-logind[1446]: Removed session 19. Jan 17 12:08:19.235803 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 50158 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:19.237692 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:19.242130 systemd-logind[1446]: New session 20 of user core. Jan 17 12:08:19.249682 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:08:20.944771 sshd[5610]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:20.955914 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:50158.service: Deactivated successfully. Jan 17 12:08:20.958410 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:08:20.961388 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:08:20.971254 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:50170.service - OpenSSH per-connection server daemon (10.0.0.1:50170). Jan 17 12:08:20.973760 systemd-logind[1446]: Removed session 20. 
Jan 17 12:08:21.029897 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 50170 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:21.031638 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:21.036320 systemd-logind[1446]: New session 21 of user core. Jan 17 12:08:21.045688 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:08:21.263184 sshd[5634]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:21.276496 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:50170.service: Deactivated successfully. Jan 17 12:08:21.279341 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:08:21.281380 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:08:21.290002 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:50182.service - OpenSSH per-connection server daemon (10.0.0.1:50182). Jan 17 12:08:21.291269 systemd-logind[1446]: Removed session 21. Jan 17 12:08:21.326130 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 50182 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:21.328258 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:21.333912 systemd-logind[1446]: New session 22 of user core. Jan 17 12:08:21.343723 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:08:21.467415 sshd[5646]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:21.472606 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:50182.service: Deactivated successfully. Jan 17 12:08:21.475338 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:08:21.476120 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:08:21.477075 systemd-logind[1446]: Removed session 22. Jan 17 12:08:21.478780 kubelet[2497]: E0117 12:08:21.478750 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:08:26.480261 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:50186.service - OpenSSH per-connection server daemon (10.0.0.1:50186). Jan 17 12:08:26.534548 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:26.536784 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:26.542194 systemd-logind[1446]: New session 23 of user core. Jan 17 12:08:26.551706 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:08:26.668705 sshd[5663]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:26.673892 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:50186.service: Deactivated successfully. Jan 17 12:08:26.676227 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:08:26.677147 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:08:26.678151 systemd-logind[1446]: Removed session 23. Jan 17 12:08:30.481559 kubelet[2497]: E0117 12:08:30.479620 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:08:31.686161 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:55390.service - OpenSSH per-connection server daemon (10.0.0.1:55390). 
Jan 17 12:08:31.731000 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 55390 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:31.733336 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:31.738512 systemd-logind[1446]: New session 24 of user core. Jan 17 12:08:31.743771 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:08:31.870846 sshd[5683]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:31.877378 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:55390.service: Deactivated successfully. Jan 17 12:08:31.881799 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:08:31.882869 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:08:31.884568 systemd-logind[1446]: Removed session 24. Jan 17 12:08:36.887307 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:55404.service - OpenSSH per-connection server daemon (10.0.0.1:55404). Jan 17 12:08:36.933102 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 55404 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:36.935137 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:36.940873 systemd-logind[1446]: New session 25 of user core. Jan 17 12:08:36.949713 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:08:37.084280 sshd[5742]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:37.090065 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:55404.service: Deactivated successfully. Jan 17 12:08:37.093479 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:08:37.094617 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:08:37.095737 systemd-logind[1446]: Removed session 25. Jan 17 12:08:38.479095 kubelet[2497]: E0117 12:08:38.479042 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:08:42.097331 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:34116.service - OpenSSH per-connection server daemon (10.0.0.1:34116). Jan 17 12:08:42.137778 sshd[5759]: Accepted publickey for core from 10.0.0.1 port 34116 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:08:42.139816 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:42.144452 systemd-logind[1446]: New session 26 of user core. Jan 17 12:08:42.152720 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:08:42.268441 sshd[5759]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:42.272612 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:34116.service: Deactivated successfully. Jan 17 12:08:42.274955 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:08:42.275727 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:08:42.277132 systemd-logind[1446]: Removed session 26.