Jan 29 11:53:53.955216 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 11:53:53.955246 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:53:53.955258 kernel: BIOS-provided physical RAM map: Jan 29 11:53:53.955265 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 29 11:53:53.955271 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 29 11:53:53.955300 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 29 11:53:53.955316 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 29 11:53:53.955325 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 29 11:53:53.955334 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 29 11:53:53.955343 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 29 11:53:53.955355 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 29 11:53:53.955361 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 29 11:53:53.955372 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 29 11:53:53.955380 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 29 11:53:53.955394 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 29 11:53:53.955404 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 29 11:53:53.955418 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 29 11:53:53.955428 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 29 11:53:53.955437 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 29 11:53:53.955444 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 11:53:53.955452 kernel: NX (Execute Disable) protection: active Jan 29 11:53:53.955461 kernel: APIC: Static calls initialized Jan 29 11:53:53.955471 kernel: efi: EFI v2.7 by EDK II Jan 29 11:53:53.955481 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 29 11:53:53.955490 kernel: SMBIOS 2.8 present. 
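The BIOS-e820 entries above are the firmware's physical-memory map: inclusive [start-end] ranges tagged usable, reserved, ACPI data, or ACPI NVS. A minimal Python sketch (not part of the boot flow; the input format is the kernel's own dmesg lines) that totals the usable ranges:

```python
# Sum the "usable" ranges from BIOS-e820 dmesg lines; ranges are inclusive.
import re

E820 = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w[\w ]*)")

def usable_bytes(lines):
    total = 0
    for line in lines:
        m = E820.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = (int(m.group(n), 16) for n in (1, 2))
            total += end - start + 1
    return total

sample = [  # two entries copied from the map above
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable",
]
print(usable_bytes(sample) // (1 << 20), "MiB usable in sample")
```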
Jan 29 11:53:53.955498 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 29 11:53:53.955505 kernel: Hypervisor detected: KVM Jan 29 11:53:53.955517 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:53:53.955524 kernel: kvm-clock: using sched offset of 6298251156 cycles Jan 29 11:53:53.955531 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:53:53.955538 kernel: tsc: Detected 2794.750 MHz processor Jan 29 11:53:53.955545 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:53:53.955555 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:53:53.955566 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 29 11:53:53.955574 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 29 11:53:53.955581 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:53:53.955592 kernel: Using GB pages for direct mapping Jan 29 11:53:53.955599 kernel: Secure boot disabled Jan 29 11:53:53.955606 kernel: ACPI: Early table checksum verification disabled Jan 29 11:53:53.955613 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 29 11:53:53.955624 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 29 11:53:53.955631 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:53:53.955639 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:53:53.955649 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 29 11:53:53.955656 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:53:53.955668 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:53:53.955675 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:53:53.955687 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:53:53.955702 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 11:53:53.955711 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 29 11:53:53.955726 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 29 11:53:53.955735 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 29 11:53:53.955745 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 29 11:53:53.955753 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 29 11:53:53.955760 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 29 11:53:53.955767 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 29 11:53:53.955774 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 29 11:53:53.955782 kernel: No NUMA configuration found Jan 29 11:53:53.955793 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 29 11:53:53.955804 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 29 11:53:53.955813 kernel: Zone ranges: Jan 29 11:53:53.955821 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:53:53.955830 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 29 11:53:53.955853 kernel: Normal empty Jan 29 11:53:53.955860 kernel: Movable zone start for each node Jan 29 11:53:53.955867 kernel: Early memory node ranges Jan 29 11:53:53.955875 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 29 11:53:53.955882 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 29 11:53:53.955889 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 29 11:53:53.955900 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 29 11:53:53.955907 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 29 11:53:53.955914 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 29 11:53:53.955924 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 29 11:53:53.955932 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:53:53.955939 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 29 11:53:53.955946 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 29 11:53:53.955954 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:53:53.955961 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 29 11:53:53.955971 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 29 11:53:53.955978 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 29 11:53:53.955986 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:53:53.955993 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:53:53.956000 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:53:53.956007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:53:53.956015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:53:53.956022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:53:53.956029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:53:53.956039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:53:53.956046 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:53:53.956054 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:53:53.956061 kernel: TSC deadline timer available Jan 29 11:53:53.956068 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 11:53:53.956075 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:53:53.956083 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 11:53:53.956090 kernel: kvm-guest: setup PV sched yield Jan 29 11:53:53.956097 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 11:53:53.956107 kernel: Booting paravirtualized kernel on KVM Jan 29 11:53:53.956115 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:53:53.956122 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 11:53:53.956129 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 11:53:53.956137 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 11:53:53.956144 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 11:53:53.956151 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:53:53.956158 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:53:53.956167 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 
11:53:53.956179 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:53:53.956186 kernel: random: crng init done Jan 29 11:53:53.956194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:53:53.956201 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:53:53.956209 kernel: Fallback order for Node 0: 0 Jan 29 11:53:53.956216 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 29 11:53:53.956223 kernel: Policy zone: DMA32 Jan 29 11:53:53.956230 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:53:53.956238 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved) Jan 29 11:53:53.956248 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 11:53:53.956256 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 11:53:53.956263 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:53:53.956270 kernel: Dynamic Preempt: voluntary Jan 29 11:53:53.956294 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:53:53.956305 kernel: rcu: RCU event tracing is enabled. Jan 29 11:53:53.956313 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 11:53:53.956321 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:53:53.956328 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:53:53.956336 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:53:53.956344 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:53:53.956354 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 11:53:53.956362 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 11:53:53.956372 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:53:53.956379 kernel: Console: colour dummy device 80x25 Jan 29 11:53:53.956387 kernel: printk: console [ttyS0] enabled Jan 29 11:53:53.956397 kernel: ACPI: Core revision 20230628 Jan 29 11:53:53.956405 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:53:53.956413 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:53:53.956421 kernel: x2apic enabled Jan 29 11:53:53.956428 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:53:53.956436 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 11:53:53.956444 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 11:53:53.956451 kernel: kvm-guest: setup PV IPIs Jan 29 11:53:53.956459 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:53:53.956469 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 11:53:53.956477 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 11:53:53.956485 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 11:53:53.956492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 11:53:53.956500 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 11:53:53.956508 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:53:53.956516 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:53:53.956531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:53:53.956543 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:53:53.956559 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 11:53:53.956569 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 11:53:53.956579 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:53:53.956590 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:53:53.956605 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 11:53:53.956615 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 11:53:53.956623 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 11:53:53.956631 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:53:53.956642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:53:53.956650 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:53:53.956657 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:53:53.956665 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:53:53.956672 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:53:53.956680 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:53:53.956687 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:53:53.956695 kernel: landlock: Up and running. Jan 29 11:53:53.956702 kernel: SELinux: Initializing. Jan 29 11:53:53.956710 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:53:53.956720 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:53:53.956728 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 11:53:53.956736 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:53:53.956743 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:53:53.956751 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:53:53.956759 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 11:53:53.956766 kernel: ... version: 0 Jan 29 11:53:53.956774 kernel: ... bit width: 48 Jan 29 11:53:53.956784 kernel: ... generic registers: 6 Jan 29 11:53:53.956791 kernel: ... value mask: 0000ffffffffffff Jan 29 11:53:53.956799 kernel: ... max period: 00007fffffffffff Jan 29 11:53:53.956807 kernel: ... fixed-purpose events: 0 Jan 29 11:53:53.956814 kernel: ... 
event mask: 000000000000003f Jan 29 11:53:53.956822 kernel: signal: max sigframe size: 1776 Jan 29 11:53:53.956829 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:53:53.956855 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:53:53.956864 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:53:53.956876 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:53:53.956884 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 11:53:53.956892 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 11:53:53.956899 kernel: smpboot: Max logical packages: 1 Jan 29 11:53:53.956907 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 11:53:53.956914 kernel: devtmpfs: initialized Jan 29 11:53:53.956924 kernel: x86/mm: Memory block size: 128MB Jan 29 11:53:53.956942 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 29 11:53:53.956953 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 29 11:53:53.956963 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 29 11:53:53.956978 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 29 11:53:53.956988 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 29 11:53:53.956999 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:53:53.957007 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 11:53:53.957015 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:53:53.957023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:53:53.957030 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:53:53.957038 kernel: audit: type=2000 audit(1738151633.130:1): state=initialized audit_enabled=0 res=1 Jan 29 11:53:53.957049 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:53:53.957056 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:53:53.957064 kernel: cpuidle: using governor menu Jan 29 11:53:53.957071 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:53:53.957079 kernel: dca service started, version 1.12.1 Jan 29 11:53:53.957087 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 11:53:53.957094 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 11:53:53.957102 kernel: PCI: Using configuration type 1 for base access Jan 29 11:53:53.957110 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:53:53.957121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:53:53.957128 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:53:53.957136 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:53:53.957143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:53:53.957151 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:53:53.957158 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:53:53.957166 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:53:53.957173 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:53:53.957181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:53:53.957191 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:53:53.957199 kernel: ACPI: Interpreter enabled Jan 29 11:53:53.957206 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:53:53.957214 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:53:53.957222 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:53:53.957229 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:53:53.957237 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 11:53:53.957244 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:53:53.957503 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:53:53.957662 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 11:53:53.957820 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 11:53:53.957850 kernel: PCI host bridge to bus 0000:00 Jan 29 11:53:53.958053 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:53:53.958221 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:53:53.958358 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:53:53.958485 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 11:53:53.958603 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:53:53.958719 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 29 11:53:53.958868 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:53:53.959026 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 11:53:53.959169 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 11:53:53.959307 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 29 11:53:53.959440 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 29 11:53:53.959568 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 29 11:53:53.959694 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 29 11:53:53.959864 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:53:53.960017 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:53:53.960148 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 29 11:53:53.960290 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 29 11:53:53.960420 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 29 11:53:53.960596 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:53:53.960745 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 29 
11:53:53.960894 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 29 11:53:53.961024 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 29 11:53:53.961162 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:53:53.961308 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 29 11:53:53.961436 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 29 11:53:53.961573 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 29 11:53:53.961713 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 29 11:53:53.961895 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 11:53:53.962027 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 11:53:53.962200 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 11:53:53.962348 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 29 11:53:53.962477 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 29 11:53:53.962653 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 11:53:53.962799 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 29 11:53:53.962811 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:53:53.962819 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:53:53.962827 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:53:53.962855 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:53:53.962868 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 11:53:53.962876 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 11:53:53.962884 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 11:53:53.962893 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 11:53:53.962906 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 11:53:53.962921 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 11:53:53.962932 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 11:53:53.962943 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 11:53:53.962953 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 11:53:53.962966 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 11:53:53.962974 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 11:53:53.962982 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 11:53:53.962990 kernel: iommu: Default domain type: Translated Jan 29 11:53:53.962997 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:53:53.963005 kernel: efivars: Registered efivars operations Jan 29 11:53:53.963012 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:53:53.963020 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:53:53.963028 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 29 11:53:53.963039 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 29 11:53:53.963046 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 29 11:53:53.963054 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 29 11:53:53.963199 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 11:53:53.963338 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 11:53:53.963466 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 
11:53:53.963477 kernel: vgaarb: loaded Jan 29 11:53:53.963485 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:53:53.963496 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:53:53.963504 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:53:53.963512 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:53:53.963520 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:53:53.963527 kernel: pnp: PnP ACPI init Jan 29 11:53:53.963753 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 11:53:53.963786 kernel: pnp: PnP ACPI: found 6 devices Jan 29 11:53:53.963794 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:53:53.963826 kernel: NET: Registered PF_INET protocol family Jan 29 11:53:53.963851 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:53:53.963859 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:53:53.963867 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:53:53.963875 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:53:53.963883 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:53:53.963890 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:53:53.963898 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:53:53.963905 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:53:53.963917 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:53:53.963925 kernel: NET: Registered PF_XDP protocol family Jan 29 11:53:53.964086 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 29 11:53:53.964230 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 29 11:53:53.964361 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:53:53.964477 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:53:53.964594 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:53:53.964710 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 11:53:53.964904 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 11:53:53.965025 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 29 11:53:53.965036 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:53:53.965044 kernel: Initialise system trusted keyrings Jan 29 11:53:53.965052 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:53:53.965059 kernel: Key type asymmetric registered Jan 29 11:53:53.965067 kernel: Asymmetric key parser 'x509' registered Jan 29 11:53:53.965075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:53:53.965083 kernel: io scheduler mq-deadline registered Jan 29 11:53:53.965095 kernel: io scheduler kyber registered Jan 29 11:53:53.965103 kernel: io scheduler bfq registered Jan 29 11:53:53.965111 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:53:53.965119 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 11:53:53.965127 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 11:53:53.965134 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 11:53:53.965142 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 29 11:53:53.965150 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:53:53.965158 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:53:53.965169 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:53:53.965177 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:53:53.965324 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 11:53:53.965341 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:53:53.965464 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 11:53:53.965584 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:53:53 UTC (1738151633) Jan 29 11:53:53.965702 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 11:53:53.965712 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 11:53:53.965725 kernel: efifb: probing for efifb Jan 29 11:53:53.965733 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 29 11:53:53.965741 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 29 11:53:53.965748 kernel: efifb: scrolling: redraw Jan 29 11:53:53.965757 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 29 11:53:53.965765 kernel: Console: switching to colour frame buffer device 100x37 Jan 29 11:53:53.965790 kernel: fb0: EFI VGA frame buffer device Jan 29 11:53:53.965801 kernel: pstore: Using crash dump compression: deflate Jan 29 11:53:53.965809 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 11:53:53.965820 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:53:53.965828 kernel: Segment Routing with IPv6 Jan 29 11:53:53.965851 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:53:53.965859 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:53:53.965867 kernel: Key type dns_resolver registered Jan 29 11:53:53.965875 kernel: IPI shorthand broadcast: enabled Jan 29 11:53:53.965883 kernel: sched_clock: Marking stable (1050003531, 128241858)->(1245740803, -67495414) Jan 29 11:53:53.965891 kernel: registered taskstats version 1 Jan 29 11:53:53.965899 kernel: Loading compiled-in X.509 certificates Jan 29 11:53:53.965911 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 11:53:53.965919 kernel: Key type .fscrypt registered Jan 29 11:53:53.965927 kernel: Key type fscrypt-provisioning registered Jan 29 11:53:53.965935 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 11:53:53.965944 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:53:53.965952 kernel: ima: No architecture policies found Jan 29 11:53:53.965960 kernel: clk: Disabling unused clocks Jan 29 11:53:53.965968 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 11:53:53.965976 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:53:53.965987 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 11:53:53.965995 kernel: Run /init as init process Jan 29 11:53:53.966003 kernel: with arguments: Jan 29 11:53:53.966011 kernel: /init Jan 29 11:53:53.966019 kernel: with environment: Jan 29 11:53:53.966027 kernel: HOME=/ Jan 29 11:53:53.966035 kernel: TERM=linux Jan 29 11:53:53.966043 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:53:53.966053 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:53:53.966066 systemd[1]: Detected virtualization kvm. Jan 29 11:53:53.966075 systemd[1]: Detected architecture x86-64. Jan 29 11:53:53.966084 systemd[1]: Running in initrd. Jan 29 11:53:53.966094 systemd[1]: No hostname configured, using default hostname. Jan 29 11:53:53.966105 systemd[1]: Hostname set to <localhost>. Jan 29 11:53:53.966114 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:53:53.966123 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:53:53.966132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:53:53.966140 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:53:53.966149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:53:53.966158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:53:53.966169 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:53:53.966178 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:53:53.966189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:53:53.966198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:53:53.966206 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:53:53.966215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:53:53.966223 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:53:53.966235 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:53:53.966243 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:53:53.966252 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:53:53.966261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:53:53.966269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:53:53.966285 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
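The device units above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) use systemd's path escaping: '/' separators become '-', and other special bytes, including a literal '-', become \xXX. A simplified sketch of that encoding (real systemd-escape handles more corner cases, such as a leading dot):

```python
# Render a device path the way systemd names its .device units.
def systemd_path_escape(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                   # safe characters pass through
        else:
            out.append(f"\\x{ord(ch):02x}")  # everything else is hex-escaped
    return "".join(out)

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit above
```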
Jan 29 11:53:53.966294 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:53:53.966303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:53:53.966312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:53:53.966323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:53:53.966332 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:53:53.966340 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:53:53.966349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:53:53.966358 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:53:53.966366 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:53:53.966375 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:53:53.966383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:53:53.966394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:53:53.966403 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:53:53.966412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:53:53.966441 systemd-journald[193]: Collecting audit messages is disabled. Jan 29 11:53:53.966467 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:53:53.966477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:53:53.966493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:53:53.966506 systemd-journald[193]: Journal started Jan 29 11:53:53.966527 systemd-journald[193]: Runtime Journal (/run/log/journal/1f1fbcd8fe6847538240d027898233f2) is 6.0M, max 48.3M, 42.2M free. Jan 29 11:53:53.967703 systemd-modules-load[194]: Inserted module 'overlay' Jan 29 11:53:53.969932 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:53:53.970327 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:53:53.983994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:53:53.987714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:53:53.991334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:53:53.999878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:53:54.002903 kernel: Bridge firewalling registered Jan 29 11:53:54.002737 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 29 11:53:54.004761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:53:54.005564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:53:54.018008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:53:54.020389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:53:54.023242 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:53:54.026058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 11:53:54.031479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:53:54.035256 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:53:54.045097 dracut-cmdline[225]: dracut-dracut-053 Jan 29 11:53:54.049900 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:53:54.089042 systemd-resolved[228]: Positive Trust Anchors: Jan 29 11:53:54.089065 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:53:54.089097 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:53:54.092348 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 29 11:53:54.093800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:53:54.099973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:53:54.152905 kernel: SCSI subsystem initialized Jan 29 11:53:54.164872 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:53:54.177896 kernel: iscsi: registered transport (tcp) Jan 29 11:53:54.205310 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:53:54.205431 kernel: QLogic iSCSI HBA Driver Jan 29 11:53:54.276205 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:53:54.287120 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:53:54.319895 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:53:54.319977 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:53:54.319991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:53:54.363881 kernel: raid6: avx2x4 gen() 29923 MB/s Jan 29 11:53:54.380861 kernel: raid6: avx2x2 gen() 29071 MB/s Jan 29 11:53:54.397961 kernel: raid6: avx2x1 gen() 24918 MB/s Jan 29 11:53:54.397994 kernel: raid6: using algorithm avx2x4 gen() 29923 MB/s Jan 29 11:53:54.416150 kernel: raid6: .... xor() 6550 MB/s, rmw enabled Jan 29 11:53:54.416188 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:53:54.436879 kernel: xor: automatically using best checksumming function avx Jan 29 11:53:54.602884 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:53:54.619546 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:53:54.631281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:53:54.645081 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 29 11:53:54.650307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
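dracut echoes the full kernel command line above; Flatcar mounts its usr partition through dm-verity, with verity.usrhash carrying the expected Merkle-tree root hash. A minimal sketch of pulling those key=value parameters out of a /proc/cmdline-style string (generic parsing, not Flatcar's actual tooling); note how the duplicated rootflags=rw simply resolves to its last occurrence:

```python
# Split a kernel command line into key=value pairs; flags without '=' map to "".
def parse_cmdline(cmdline):
    args = {}
    for tok in cmdline.split():
        key, _, val = tok.partition("=")  # only the first '=' splits
        args[key] = val                   # a repeated key keeps its last value
    return args

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected "
           "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681")

args = parse_cmdline(cmdline)
print(args["root"], args["verity.usrhash"][:12] + "...")
```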
Jan 29 11:53:54.659997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:53:54.674921 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jan 29 11:53:54.714821 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:53:54.726090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:53:54.809107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:53:54.816019 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:53:54.841232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:53:54.844980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:53:54.850015 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:53:54.884478 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:53:54.884661 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:53:54.884678 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:53:54.884692 kernel: GPT:9289727 != 19775487 Jan 29 11:53:54.884705 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:53:54.884718 kernel: GPT:9289727 != 19775487 Jan 29 11:53:54.884731 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:53:54.884744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:53:54.884758 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:53:54.884777 kernel: AES CTR mode by8 optimization enabled Jan 29 11:53:54.848668 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:53:54.851172 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:53:54.861136 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:53:54.884773 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:53:54.894584 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:53:54.894659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:53:54.899416 kernel: libata version 3.00 loaded. Jan 29 11:53:54.900193 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:53:54.904046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:53:54.904126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:53:54.904607 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:53:54.913325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
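The GPT warnings above decode to simple arithmetic: the backup GPT header is expected on the disk's last LBA, but this image was partitioned for a smaller disk and later enlarged, so the kernel finds the backup header well short of the end (parted or sgdisk can relocate it, as the log suggests). The numbers, worked through:

```python
# Numbers from the virtio-blk/GPT lines above.
SECTOR = 512
total_sectors = 19_775_488          # "[vda] 19775488 512-byte logical blocks"
expected_alt = total_sectors - 1    # last LBA, where the backup header belongs
actual_alt = 9_289_727              # where the primary header says it is

print(f"backup GPT header at LBA {actual_alt}, expected LBA {expected_alt}")
print(f"disk grew by {(expected_alt - actual_alt) * SECTOR / 2**30:.1f} GiB "
      f"after partitioning")
```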
Jan 29 11:53:54.922520 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:53:54.956895 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) Jan 29 11:53:54.956953 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:53:54.956971 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:53:54.957269 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:53:54.957464 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (472) Jan 29 11:53:54.957481 kernel: scsi host0: ahci Jan 29 11:53:54.957663 kernel: scsi host1: ahci Jan 29 11:53:54.957863 kernel: scsi host2: ahci Jan 29 11:53:54.958017 kernel: scsi host3: ahci Jan 29 11:53:54.958167 kernel: scsi host4: ahci Jan 29 11:53:54.958330 kernel: scsi host5: ahci Jan 29 11:53:54.958489 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 29 11:53:54.958502 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 29 11:53:54.958512 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 29 11:53:54.958528 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 29 11:53:54.958539 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 29 11:53:54.958550 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 29 11:53:54.927570 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:53:54.946321 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:53:54.948736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:53:54.968227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:53:54.975140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:53:54.978596 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:53:54.993075 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:53:54.996773 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:53:55.003486 disk-uuid[567]: Primary Header is updated. Jan 29 11:53:55.003486 disk-uuid[567]: Secondary Entries is updated. Jan 29 11:53:55.003486 disk-uuid[567]: Secondary Header is updated. Jan 29 11:53:55.007179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:53:55.011869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:53:55.028001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:53:55.265899 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:53:55.265994 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:53:55.266007 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:53:55.266888 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:53:55.267880 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:53:55.268861 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:53:55.269871 kernel: ata3.00: applying bridge limits Jan 29 11:53:55.269895 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:53:55.270866 kernel: ata3.00: configured for UDMA/100 Jan 29 11:53:55.272875 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:53:55.312051 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:53:55.325205 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:53:55.325227 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:53:56.017726 disk-uuid[569]: The operation has completed successfully. Jan 29 11:53:56.019254 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:53:56.049382 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:53:56.049512 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:53:56.097173 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:53:56.101461 sh[594]: Success Jan 29 11:53:56.128872 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:53:56.175135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:53:56.190485 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:53:56.194874 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:53:56.210786 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 11:53:56.210861 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:53:56.210890 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:53:56.211983 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:53:56.214100 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:53:56.226399 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:53:56.229059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:53:56.239206 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:53:56.242727 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:53:56.252190 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:53:56.252246 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:53:56.252260 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:53:56.256868 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:53:56.268935 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:53:56.271349 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:53:56.288601 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 29 11:53:56.297045 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:53:56.384713 ignition[686]: Ignition 2.19.0 Jan 29 11:53:56.384737 ignition[686]: Stage: fetch-offline Jan 29 11:53:56.384797 ignition[686]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:53:56.384810 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:53:56.384976 ignition[686]: parsed url from cmdline: "" Jan 29 11:53:56.384981 ignition[686]: no config URL provided Jan 29 11:53:56.384988 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:53:56.385001 ignition[686]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:53:56.385039 ignition[686]: op(1): [started] loading QEMU firmware config module Jan 29 11:53:56.385046 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:53:56.396656 ignition[686]: op(1): [finished] loading QEMU firmware config module Jan 29 11:53:56.420486 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:53:56.435248 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:53:56.446356 ignition[686]: parsing config with SHA512: ff699ae40fb6ceac564bdbc899b222d8e59205886dee1ea296ab497dd8c367372349170d94c70139566faff29e8bd8bbe448ce2efb2155628ca48daf589710dd Jan 29 11:53:56.450965 unknown[686]: fetched base config from "system" Jan 29 11:53:56.451397 unknown[686]: fetched user config from "qemu" Jan 29 11:53:56.451801 ignition[686]: fetch-offline: fetch-offline passed Jan 29 11:53:56.451887 ignition[686]: Ignition finished successfully Jan 29 11:53:56.455259 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:53:56.468038 systemd-networkd[782]: lo: Link UP Jan 29 11:53:56.468049 systemd-networkd[782]: lo: Gained carrier Jan 29 11:53:56.471138 systemd-networkd[782]: Enumeration completed Jan 29 11:53:56.471829 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:53:56.473596 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:53:56.473600 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:53:56.474644 systemd-networkd[782]: eth0: Link UP Jan 29 11:53:56.474648 systemd-networkd[782]: eth0: Gained carrier Jan 29 11:53:56.474655 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:53:56.475150 systemd[1]: Reached target network.target - Network. Jan 29 11:53:56.477867 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:53:56.486175 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
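The digest Ignition prints above ("parsing config with SHA512: ...") appears to be a plain SHA-512 over the raw config bytes, which makes it useful for matching a boot log against the config that was actually served. A sketch with a stand-in config (the bytes below are hypothetical, not this boot's config):

```python
# Reproduce the style of Ignition's "parsing config with SHA512" line
# for an arbitrary config blob.
import hashlib

raw = b'{"ignition": {"version": "3.0.0"}}'  # stand-in, not this boot's config
digest = hashlib.sha512(raw).hexdigest()
print(f'parsing config with SHA512: {digest}')
```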
Jan 29 11:53:56.499983 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:53:56.505984 ignition[785]: Ignition 2.19.0 Jan 29 11:53:56.505998 ignition[785]: Stage: kargs Jan 29 11:53:56.506181 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:53:56.506194 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:53:56.507103 ignition[785]: kargs: kargs passed Jan 29 11:53:56.507158 ignition[785]: Ignition finished successfully Jan 29 11:53:56.516021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:53:56.529318 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:53:56.546886 ignition[794]: Ignition 2.19.0 Jan 29 11:53:56.546911 ignition[794]: Stage: disks Jan 29 11:53:56.547140 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:53:56.547157 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:53:56.548205 ignition[794]: disks: disks passed Jan 29 11:53:56.548298 ignition[794]: Ignition finished successfully Jan 29 11:53:56.555672 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:53:56.558553 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:53:56.558861 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:53:56.559490 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:53:56.560087 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:53:56.560490 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:53:56.583124 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:53:56.597154 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:53:56.764365 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:53:56.778143 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:53:56.889865 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 11:53:56.890428 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:53:56.892799 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:53:56.911020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:53:56.914205 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:53:56.920093 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:53:56.920161 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:53:56.921953 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:53:56.928528 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
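systemd-networkd's lease above (10.0.0.98/16 with gateway 10.0.0.1) places host and gateway in the same /16; a two-line consistency check with the standard library:

```python
# Confirm the DHCPv4 lease above is internally consistent.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.98/16")
print(iface.network)                                      # 10.0.0.0/16
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True
```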
Jan 29 11:53:56.934060 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Jan 29 11:53:56.934095 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:53:56.934110 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:53:56.934124 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:53:56.936869 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:53:56.940036 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:53:56.944101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:53:57.026783 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:53:57.033967 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:53:57.038151 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:53:57.080258 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:53:57.185004 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:53:57.214151 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:53:57.229882 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:53:57.231517 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:53:57.254064 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:53:57.276827 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:53:57.296514 ignition[928]: INFO : Ignition 2.19.0 Jan 29 11:53:57.296514 ignition[928]: INFO : Stage: mount Jan 29 11:53:57.302398 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:53:57.302398 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:53:57.305364 ignition[928]: INFO : mount: mount passed Jan 29 11:53:57.306196 ignition[928]: INFO : Ignition finished successfully Jan 29 11:53:57.309980 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:53:57.333005 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:53:57.342751 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:53:57.381923 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Jan 29 11:53:57.402150 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:53:57.402244 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:53:57.402258 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:53:57.406887 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:53:57.410085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:53:57.459405 ignition[956]: INFO : Ignition 2.19.0 Jan 29 11:53:57.459405 ignition[956]: INFO : Stage: files Jan 29 11:53:57.462180 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:53:57.462180 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:53:57.462180 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:53:57.462180 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:53:57.462180 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:53:57.481615 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:53:57.481615 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:53:57.481615 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:53:57.481615 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:53:57.481615 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 11:53:57.478880 unknown[956]: wrote ssh authorized keys file for user: core Jan 29 11:53:57.524532 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:53:57.726297 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:53:57.728634 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:53:57.756035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 11:53:57.810062 systemd-networkd[782]: eth0: Gained IPv6LL Jan 29 11:53:58.095274 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:53:58.866253 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:53:58.866253 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 11:53:58.870358 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:53:58.948495 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:53:58.957411 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:53:58.959306 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:53:58.959306 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:53:58.959306 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:53:58.959306 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:53:58.959306 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:53:58.959306 ignition[956]: INFO : files: files passed Jan 29 11:53:58.959306 ignition[956]: INFO : Ignition finished successfully Jan 29 11:53:58.961017 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:53:58.966039 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:53:58.967270 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 29 11:53:58.979368 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:53:58.982978 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:53:58.982978 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:53:58.981030 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:53:58.991016 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:53:58.981199 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:53:58.986420 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:53:58.988803 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:53:59.000037 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:53:59.043462 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:53:59.043621 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:53:59.046549 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:53:59.049176 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:53:59.049527 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:53:59.050611 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:53:59.076759 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:53:59.088284 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:53:59.101572 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:53:59.103209 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:53:59.103620 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:53:59.104221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:53:59.104377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:53:59.105243 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:53:59.105629 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:53:59.106170 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:53:59.106822 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:53:59.107228 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:53:59.107636 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:53:59.108254 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:53:59.108648 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:53:59.109181 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:53:59.109526 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:53:59.109860 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:53:59.110016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:53:59.110774 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
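
Each op(N) in the files stage logged above maps one-to-one onto a stanza of the user config. A hedged Butane reconstruction of a config that would produce those operations: the paths, URLs, link target, and unit names are copied from the log, while file bodies, modes, and unit contents are assumptions; install.sh and the three Kubernetes manifests are omitted because their contents are not recoverable:

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
        contents:
          source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw
      - path: /etc/flatcar/update.conf
        overwrite: true
        contents:
          inline: REBOOT_STRATEGY=off   # assumed body; only the path appears in the log
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
  systemd:
    units:
      - name: prepare-helm.service      # "setting preset to enabled" in the log
        enabled: true
        contents: |                     # assumed unit body
          [Unit]
          Description=Unpack helm to /opt/bin
          [Service]
          Type=oneshot
          ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm
          [Install]
          WantedBy=multi-user.target
      - name: coreos-metadata.service   # "setting preset to disabled" / symlink removal in the log
        enabled: false                  # unit body written by op(e) omitted; not recoverable
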
Jan 29 11:53:59.111337 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:53:59.111657 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:53:59.111853 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:53:59.112273 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:53:59.164496 ignition[1012]: INFO : Ignition 2.19.0 Jan 29 11:53:59.164496 ignition[1012]: INFO : Stage: umount Jan 29 11:53:59.164496 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:53:59.164496 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:53:59.164496 ignition[1012]: INFO : umount: umount passed Jan 29 11:53:59.164496 ignition[1012]: INFO : Ignition finished successfully Jan 29 11:53:59.112445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:53:59.113143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:53:59.113313 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:53:59.113821 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:53:59.114254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:53:59.117938 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:53:59.118465 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:53:59.118959 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:53:59.119273 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:53:59.119374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:53:59.119856 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:53:59.119952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:53:59.120493 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:53:59.120620 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:53:59.121180 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:53:59.121285 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:53:59.143554 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:53:59.145153 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:53:59.145361 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:53:59.149555 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:53:59.151035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:53:59.151224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:53:59.153334 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:53:59.153480 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:53:59.201188 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:53:59.203116 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:53:59.204268 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:53:59.208686 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:53:59.209832 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 29 11:53:59.213283 systemd[1]: Stopped target network.target - Network. Jan 29 11:53:59.215324 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:53:59.216481 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:53:59.218762 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:53:59.219859 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:53:59.222206 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:53:59.223277 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:53:59.225549 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:53:59.226664 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:53:59.229279 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:53:59.231821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:53:59.236895 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 29 11:53:59.239057 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:53:59.240260 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:53:59.243257 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:53:59.244521 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:53:59.249199 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:53:59.250317 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:53:59.265003 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:53:59.267254 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:53:59.268427 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:53:59.271617 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:53:59.272707 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:53:59.275138 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:53:59.276278 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:53:59.278817 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:53:59.278896 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:53:59.282930 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:53:59.294408 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:53:59.295559 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:53:59.301912 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:53:59.342722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:53:59.345971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:53:59.346033 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:53:59.349442 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:53:59.350646 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:53:59.353278 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:53:59.353354 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 29 11:53:59.356555 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:53:59.357505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:53:59.359818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:53:59.359905 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:53:59.377096 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:53:59.384480 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:53:59.384575 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:53:59.387792 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:53:59.387873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:53:59.388737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:53:59.388884 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:53:59.599551 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:53:59.599746 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:53:59.602062 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:53:59.602379 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:53:59.602455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:53:59.617270 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:53:59.627728 systemd[1]: Switching root. Jan 29 11:53:59.658930 systemd-journald[193]: Journal stopped Jan 29 11:54:01.241317 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 29 11:54:01.241423 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:54:01.241443 kernel: SELinux: policy capability open_perms=1 Jan 29 11:54:01.241458 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:54:01.241474 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:54:01.241490 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:54:01.241509 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:54:01.241528 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:54:01.241543 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:54:01.241563 kernel: audit: type=1403 audit(1738151640.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:54:01.241594 systemd[1]: Successfully loaded SELinux policy in 44.264ms. Jan 29 11:54:01.241624 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.819ms. Jan 29 11:54:01.241643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:54:01.241660 systemd[1]: Detected virtualization kvm. Jan 29 11:54:01.241677 systemd[1]: Detected architecture x86-64. Jan 29 11:54:01.241693 systemd[1]: Detected first boot. Jan 29 11:54:01.241713 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:54:01.241729 zram_generator::config[1055]: No configuration found. Jan 29 11:54:01.241757 systemd[1]: Populated /etc with preset unit settings. 
Jan 29 11:54:01.241774 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:54:01.241790 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:54:01.241807 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:54:01.241824 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:54:01.241858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:54:01.241876 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:54:01.241893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:54:01.241915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:54:01.241932 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:54:01.241950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:54:01.241966 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:54:01.241987 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:54:01.242011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:54:01.242030 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:54:01.242047 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:54:01.242064 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:54:01.242085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:54:01.242101 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:54:01.242130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:54:01.242153 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:54:01.242170 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:54:01.242191 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:54:01.242214 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:54:01.242235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:54:01.242261 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:54:01.242297 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:54:01.242332 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:54:01.242350 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:54:01.242367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:54:01.242384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:54:01.242401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:54:01.242417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:54:01.242434 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:54:01.242457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 29 11:54:01.242473 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:54:01.242493 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:54:01.242510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:01.242527 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:54:01.242543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:54:01.242560 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:54:01.242578 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:54:01.242599 systemd[1]: Reached target machines.target - Containers. Jan 29 11:54:01.242615 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:54:01.242632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:54:01.242649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:54:01.242664 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:54:01.242680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:54:01.242696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:54:01.242713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:54:01.242733 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:54:01.242750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:54:01.242766 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:54:01.242782 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:54:01.242799 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:54:01.242816 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:54:01.242832 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:54:01.242868 kernel: fuse: init (API version 7.39) Jan 29 11:54:01.242885 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:54:01.242907 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:54:01.242924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:54:01.242940 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:54:01.242957 kernel: ACPI: bus type drm_connector registered Jan 29 11:54:01.242972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:54:01.242988 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:54:01.243004 kernel: loop: module loaded Jan 29 11:54:01.243021 systemd[1]: Stopped verity-setup.service. Jan 29 11:54:01.243038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:01.243059 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 29 11:54:01.243077 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:54:01.243094 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:54:01.243122 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:54:01.243139 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:54:01.243161 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:54:01.243177 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:54:01.243219 systemd-journald[1129]: Collecting audit messages is disabled. Jan 29 11:54:01.243254 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:54:01.243273 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:54:01.243292 systemd-journald[1129]: Journal started Jan 29 11:54:01.243326 systemd-journald[1129]: Runtime Journal (/run/log/journal/1f1fbcd8fe6847538240d027898233f2) is 6.0M, max 48.3M, 42.2M free. Jan 29 11:54:00.919345 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:54:00.939698 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:54:00.940222 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:54:01.245671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:54:01.245701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:54:01.249781 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:54:01.251155 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:54:01.251411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:54:01.253226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:54:01.253485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:54:01.255498 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:54:01.255741 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:54:01.257580 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:54:01.257808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:54:01.259938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:54:01.261926 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:54:01.263785 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:54:01.265780 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:54:01.281375 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:54:01.290915 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:54:01.293398 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:54:01.294675 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:54:01.294771 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:54:01.296933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:54:01.299434 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 29 11:54:01.302563 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:54:01.303817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:54:01.306047 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:54:01.312591 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:54:01.314732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:54:01.319001 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:54:01.320323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:54:01.323969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:54:01.329885 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:54:01.331079 systemd-journald[1129]: Time spent on flushing to /var/log/journal/1f1fbcd8fe6847538240d027898233f2 is 18.625ms for 989 entries. Jan 29 11:54:01.331079 systemd-journald[1129]: System Journal (/var/log/journal/1f1fbcd8fe6847538240d027898233f2) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:54:01.561570 systemd-journald[1129]: Received client request to flush runtime journal. Jan 29 11:54:01.561622 kernel: loop0: detected capacity change from 0 to 140768 Jan 29 11:54:01.561639 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:54:01.561654 kernel: loop1: detected capacity change from 0 to 218376 Jan 29 11:54:01.343881 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:54:01.363566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:54:01.365121 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:54:01.366821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:54:01.368514 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:54:01.377168 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:54:01.421988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:54:01.429707 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:54:01.472569 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:54:01.486270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:54:01.511548 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 29 11:54:01.511567 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 29 11:54:01.519181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:54:01.620957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:54:01.622935 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:54:01.628275 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 29 11:54:01.639358 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:54:01.680875 kernel: loop2: detected capacity change from 0 to 142488 Jan 29 11:54:01.817167 kernel: loop3: detected capacity change from 0 to 140768 Jan 29 11:54:01.839912 kernel: loop4: detected capacity change from 0 to 218376 Jan 29 11:54:01.856163 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 11:54:01.863383 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:54:01.864973 (sd-merge)[1192]: Merged extensions into '/usr'. Jan 29 11:54:01.935766 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:54:01.936056 systemd[1]: Reloading... Jan 29 11:54:02.064890 zram_generator::config[1231]: No configuration found. Jan 29 11:54:02.211006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:54:02.273640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:54:02.274725 systemd[1]: Reloading finished in 338 ms. Jan 29 11:54:02.285672 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:54:02.319006 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:54:02.320954 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:54:02.344372 systemd[1]: Starting ensure-sysext.service... Jan 29 11:54:02.347923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:54:02.388831 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:54:02.389244 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:54:02.390291 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:54:02.390593 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 29 11:54:02.390678 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 29 11:54:02.394304 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:54:02.394316 systemd-tmpfiles[1257]: Skipping /boot Jan 29 11:54:02.395149 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:54:02.395168 systemd[1]: Reloading... Jan 29 11:54:02.408433 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:54:02.408448 systemd-tmpfiles[1257]: Skipping /boot Jan 29 11:54:02.451637 zram_generator::config[1285]: No configuration found. Jan 29 11:54:02.587273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:54:02.645544 systemd[1]: Reloading finished in 249 ms. Jan 29 11:54:02.665802 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:54:02.667649 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
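
The sd-merge step above merged three system extensions into /usr: the containerd-flatcar and docker-flatcar images shipped with the OS, plus the kubernetes image that the files stage downloaded and symlinked at /etc/extensions/kubernetes.raw. systemd-sysext only activates an image whose embedded extension-release matches the host OS, so the .raw must carry a file of roughly this shape (the match mechanism is the documented one; the field values inside the bakery image are assumptions):

  # inside kubernetes.raw: /usr/lib/extension-release.d/extension-release.kubernetes
  ID=flatcar           # must match ID in the host's /etc/os-release (or be _any)
  SYSEXT_LEVEL=1.0     # assumed; stands in for a VERSION_ID match
  ARCHITECTURE=x86-64  # optional architecture guard
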
Jan 29 11:54:02.679720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:54:02.693574 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:54:02.696943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:54:02.699696 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:54:02.707024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:54:02.712361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:54:02.717176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:54:02.722871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:02.723135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:54:02.725162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:54:02.728518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:54:02.733373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:54:02.734766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:54:02.740177 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:54:02.741447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:02.745343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:02.745576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:54:02.745810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:54:02.745961 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:02.752735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:54:02.753051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:54:02.755339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:54:02.755653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:54:02.763583 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:54:02.764962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:54:02.768151 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:54:02.775620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:02.776706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:54:02.780428 systemd-udevd[1334]: Using default interface naming scheme 'v255'. 
Jan 29 11:54:02.785222 augenrules[1355]: No rules Jan 29 11:54:02.785382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:54:02.786978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:54:02.788536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:54:02.789701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:54:02.794966 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:54:02.796258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:54:02.798464 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:54:02.800812 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:54:02.801159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:54:02.810187 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:54:02.819436 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:54:02.833452 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:54:02.835924 systemd[1]: Finished ensure-sysext.service. Jan 29 11:54:02.837274 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:54:02.841650 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:54:02.853480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:54:02.867409 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:54:02.868661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:54:02.881041 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:54:02.888972 systemd-resolved[1329]: Positive Trust Anchors: Jan 29 11:54:02.888993 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:54:02.889028 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:54:02.893131 systemd-resolved[1329]: Defaulting to hostname 'linux'. Jan 29 11:54:02.895154 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:54:02.897980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
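
The positive trust anchor listed above is the IANA root KSK-2017 DS record that systemd-resolved compiles in, and the negative anchors are the usual private-use and reverse zones exempted from DNSSEC validation. Additional anchors can be supplied as *.positive drop-ins; a minimal sketch reusing the record from the log (the directory is the documented dnssec-trust-anchors.d location, the file name is arbitrary):

  # /etc/dnssec-trust-anchors.d/root.positive
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
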
Jan 29 11:54:02.919928 systemd-networkd[1376]: lo: Link UP Jan 29 11:54:02.920309 systemd-networkd[1376]: lo: Gained carrier Jan 29 11:54:02.921400 systemd-networkd[1376]: Enumeration completed Jan 29 11:54:02.921637 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:54:02.923191 systemd[1]: Reached target network.target - Network. Jan 29 11:54:02.941280 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:54:02.955560 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:54:02.955716 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:54:02.957157 systemd-networkd[1376]: eth0: Link UP Jan 29 11:54:02.957230 systemd-networkd[1376]: eth0: Gained carrier Jan 29 11:54:02.957283 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:54:02.973681 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373) Jan 29 11:54:02.973911 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:54:02.987947 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:54:02.994628 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:54:02.996331 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:54:03.699692 systemd-resolved[1329]: Clock change detected. Flushing caches. Jan 29 11:54:03.699815 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:54:03.699860 systemd-timesyncd[1392]: Initial clock synchronization to Wed 2025-01-29 11:54:03.699620 UTC. Jan 29 11:54:03.703856 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:54:03.710473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:54:03.726420 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 29 11:54:03.726756 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:54:03.726967 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:54:03.727169 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:54:03.728482 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:54:03.724492 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:54:03.756991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:03.773527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:54:03.773947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:03.783064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:03.811867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:54:03.816890 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:54:03.905111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:54:03.942280 kernel: kvm_amd: TSC scaling supported Jan 29 11:54:03.942382 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:54:03.942402 kernel: kvm_amd: Nested Paging enabled Jan 29 11:54:03.942449 kernel: kvm_amd: LBR virtualization supported Jan 29 11:54:03.942870 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:54:03.943961 kernel: kvm_amd: Virtual GIF supported Jan 29 11:54:03.963820 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:54:03.993587 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:54:04.007045 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:54:04.017897 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:54:04.056072 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:54:04.057871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:54:04.059106 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:54:04.060406 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:54:04.061759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:54:04.063364 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:54:04.064680 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:54:04.065991 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:54:04.067315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:54:04.067352 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:54:04.068327 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:54:04.070152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:54:04.073663 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:54:04.086862 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:54:04.089459 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:54:04.091322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:54:04.092654 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:54:04.093768 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:54:04.094889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:54:04.094928 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:54:04.096210 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:54:04.098937 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:54:04.103215 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:54:04.108356 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:54:04.109820 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 29 11:54:04.112658 jq[1426]: false Jan 29 11:54:04.113315 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:54:04.114025 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:54:04.118920 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:54:04.123978 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:54:04.127998 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:54:04.136902 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:54:04.138591 extend-filesystems[1427]: Found loop3 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found loop4 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found loop5 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found sr0 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda1 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda2 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda3 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found usr Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda4 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda6 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda7 Jan 29 11:54:04.138591 extend-filesystems[1427]: Found vda9 Jan 29 11:54:04.138591 extend-filesystems[1427]: Checking size of /dev/vda9 Jan 29 11:54:04.139924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:54:04.138716 dbus-daemon[1425]: [system] SELinux support is enabled Jan 29 11:54:04.140582 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:54:04.143097 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:54:04.151844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:54:04.155203 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:54:04.159627 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:54:04.172051 jq[1441]: true Jan 29 11:54:04.163232 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:54:04.163500 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:54:04.163935 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:54:04.164204 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:54:04.169428 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:54:04.169727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:54:04.183622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:54:04.183653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:54:04.185359 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 29 11:54:04.185379 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:54:04.189160 extend-filesystems[1427]: Resized partition /dev/vda9 Jan 29 11:54:04.192537 jq[1449]: true Jan 29 11:54:04.194864 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:54:04.202591 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:54:04.206422 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:54:04.208840 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1381) Jan 29 11:54:04.210617 update_engine[1440]: I20250129 11:54:04.210506 1440 main.cc:92] Flatcar Update Engine starting Jan 29 11:54:04.217173 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:54:04.218158 update_engine[1440]: I20250129 11:54:04.218104 1440 update_check_scheduler.cc:74] Next update check in 4m44s Jan 29 11:54:04.228999 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:54:04.232104 tar[1448]: linux-amd64/LICENSE Jan 29 11:54:04.232618 tar[1448]: linux-amd64/helm Jan 29 11:54:04.234833 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:54:04.274819 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:54:04.274819 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:54:04.274819 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:54:04.285384 extend-filesystems[1427]: Resized filesystem in /dev/vda9 Jan 29 11:54:04.275833 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:54:04.277603 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:54:04.279870 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:54:04.279895 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:54:04.282027 systemd-logind[1438]: New seat seat0. Jan 29 11:54:04.294970 bash[1479]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:54:04.292496 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:54:04.294053 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:54:04.296712 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:54:04.321221 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:54:04.519188 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:54:04.561440 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:54:04.576453 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:54:04.586713 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:54:04.587110 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:54:04.591222 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:54:04.627531 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:54:04.639529 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:54:04.642706 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
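
The extend-filesystems/resize2fs exchange above is an online ext4 grow: the root filesystem on /dev/vda9 is expanded from 553472 to 1864699 blocks while still mounted at /. A minimal sketch of the same operation, not Flatcar's actual extend-filesystems implementation (device name taken from the log; assumes e2fsprogs' resize2fs is on PATH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Online grow: with no size argument, resize2fs expands the mounted ext4
	// filesystem to fill its partition -- the 553472 -> 1864699 block change
	// logged above, with no unmount or reboot required.
	out, err := exec.Command("resize2fs", "/dev/vda9").CombinedOutput()
	if err != nil {
		log.Fatalf("resize2fs: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
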
Jan 29 11:54:04.644046 containerd[1460]: time="2025-01-29T11:54:04.643707623Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:54:04.644380 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:54:04.676329 containerd[1460]: time="2025-01-29T11:54:04.676254398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.678721 containerd[1460]: time="2025-01-29T11:54:04.678585910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:54:04.678721 containerd[1460]: time="2025-01-29T11:54:04.678641484Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:54:04.678721 containerd[1460]: time="2025-01-29T11:54:04.678664137Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:54:04.679015 containerd[1460]: time="2025-01-29T11:54:04.678908615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:54:04.679015 containerd[1460]: time="2025-01-29T11:54:04.678930777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.679015 containerd[1460]: time="2025-01-29T11:54:04.679013181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:54:04.679120 containerd[1460]: time="2025-01-29T11:54:04.679026636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.679295 containerd[1460]: time="2025-01-29T11:54:04.679235388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:54:04.679295 containerd[1460]: time="2025-01-29T11:54:04.679254904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.679295 containerd[1460]: time="2025-01-29T11:54:04.679267558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:54:04.679295 containerd[1460]: time="2025-01-29T11:54:04.679276605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.680695 containerd[1460]: time="2025-01-29T11:54:04.679368467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.680695 containerd[1460]: time="2025-01-29T11:54:04.679617214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:54:04.680695 containerd[1460]: time="2025-01-29T11:54:04.679743340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:54:04.680695 containerd[1460]: time="2025-01-29T11:54:04.679756375Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:54:04.680695 containerd[1460]: time="2025-01-29T11:54:04.679872723Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:54:04.680695 containerd[1460]: time="2025-01-29T11:54:04.679926123Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.691219237Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.691503110Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.691592157Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.691623205Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.691755152Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.692268535Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.692821341Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693012580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693030504Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693046303Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693061772Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693076119Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693090536Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.693831 containerd[1460]: time="2025-01-29T11:54:04.693106656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693125792Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693142073Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693172470Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693189892Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693213036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693227603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693240668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693252961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693266246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693279240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693299648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693313725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693326248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.694684 containerd[1460]: time="2025-01-29T11:54:04.693344122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693356174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693368768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693382484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693398083Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693460620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693474586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693485627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693556530Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693585975Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693597837Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693613948Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693627303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693694939Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:54:04.695417 containerd[1460]: time="2025-01-29T11:54:04.693716871Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:54:04.695947 containerd[1460]: time="2025-01-29T11:54:04.693734474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:54:04.698528 containerd[1460]: time="2025-01-29T11:54:04.698262443Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:54:04.699234 containerd[1460]: time="2025-01-29T11:54:04.698641504Z" level=info msg="Connect containerd service" Jan 29 11:54:04.699234 containerd[1460]: time="2025-01-29T11:54:04.699046333Z" level=info msg="using legacy CRI server" Jan 29 11:54:04.699234 containerd[1460]: time="2025-01-29T11:54:04.699072482Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:54:04.700038 containerd[1460]: time="2025-01-29T11:54:04.699970526Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:54:04.701823 containerd[1460]: time="2025-01-29T11:54:04.701637953Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:54:04.702604 containerd[1460]: time="2025-01-29T11:54:04.702404650Z" level=info msg="Start subscribing containerd event" Jan 29 11:54:04.702839 containerd[1460]: time="2025-01-29T11:54:04.702700845Z" level=info msg="Start recovering state" Jan 29 11:54:04.703326 containerd[1460]: time="2025-01-29T11:54:04.703195072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:54:04.703326 containerd[1460]: time="2025-01-29T11:54:04.703198368Z" level=info msg="Start event monitor" Jan 29 11:54:04.703326 containerd[1460]: time="2025-01-29T11:54:04.703352026Z" level=info msg="Start snapshots syncer" Jan 29 11:54:04.703778 containerd[1460]: time="2025-01-29T11:54:04.703401108Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:54:04.703778 containerd[1460]: time="2025-01-29T11:54:04.703433830Z" level=info msg="Start streaming server" Jan 29 11:54:04.703778 containerd[1460]: time="2025-01-29T11:54:04.703501246Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:54:04.704089 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:54:04.704766 containerd[1460]: time="2025-01-29T11:54:04.704698201Z" level=info msg="containerd successfully booted in 0.062127s" Jan 29 11:54:04.848976 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 29 11:54:04.853048 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:54:04.855532 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:54:04.864155 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:54:04.867489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
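
containerd is now serving on /run/containerd/containerd.sock, which is where every PullImage/Pulled pair later in this log originates via CRI. A minimal sketch of driving that same socket directly, assuming the containerd Go client (github.com/containerd/containerd); the image ref is one the kubelet pulls further down:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the socket containerd reports serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same ref the kubelet pulls later in this log.
	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", image.Name())
}
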
Jan 29 11:54:04.873876 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:54:04.901894 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:54:04.902230 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:54:04.904336 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:54:04.907967 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:54:04.985840 tar[1448]: linux-amd64/README.md Jan 29 11:54:05.002322 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:54:06.613416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:06.615279 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:54:06.618339 systemd[1]: Startup finished in 1.193s (kernel) + 6.516s (initrd) + 5.718s (userspace) = 13.428s. Jan 29 11:54:06.619657 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:54:07.410075 kubelet[1538]: E0129 11:54:07.409950 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:54:07.414579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:54:07.414819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:54:07.415239 systemd[1]: kubelet.service: Consumed 2.396s CPU time. Jan 29 11:54:07.681776 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:54:07.683204 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:50694.service - OpenSSH per-connection server daemon (10.0.0.1:50694). Jan 29 11:54:07.734315 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 50694 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:07.736769 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:07.746942 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:54:07.761090 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:54:07.763615 systemd-logind[1438]: New session 1 of user core. Jan 29 11:54:07.776477 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:54:07.779812 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:54:07.789033 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:54:07.920308 systemd[1555]: Queued start job for default target default.target. Jan 29 11:54:07.930367 systemd[1555]: Created slice app.slice - User Application Slice. Jan 29 11:54:07.930400 systemd[1555]: Reached target paths.target - Paths. Jan 29 11:54:07.930420 systemd[1555]: Reached target timers.target - Timers. Jan 29 11:54:07.932393 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:54:07.979689 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:54:07.979908 systemd[1555]: Reached target sockets.target - Sockets. Jan 29 11:54:07.979941 systemd[1555]: Reached target basic.target - Basic System. 
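
The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file is only written when the node is initialized or joined, so the unit keeps failing and restarting until then. For illustration only, a sketch of the shape of a minimal KubeletConfiguration the unit is looking for (cgroupDriver matches the "CgroupDriver":"systemd" the kubelet reports later in this log; the rest is an assumption, and this is not the fix that was applied on this host):

package main

import (
	"log"
	"os"
)

// Hypothetical minimal config; the real file is generated by kubeadm.
const minimalKubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml",
		[]byte(minimalKubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}
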
Jan 29 11:54:07.979998 systemd[1555]: Reached target default.target - Main User Target. Jan 29 11:54:07.980042 systemd[1555]: Startup finished in 183ms. Jan 29 11:54:07.980890 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:54:07.995167 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:54:08.063213 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:50708.service - OpenSSH per-connection server daemon (10.0.0.1:50708). Jan 29 11:54:08.098371 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 50708 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:08.100121 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:08.104654 systemd-logind[1438]: New session 2 of user core. Jan 29 11:54:08.119178 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:54:08.176016 sshd[1566]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:08.189858 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:50708.service: Deactivated successfully. Jan 29 11:54:08.192422 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:54:08.194155 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:54:08.205274 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:50714.service - OpenSSH per-connection server daemon (10.0.0.1:50714). Jan 29 11:54:08.206721 systemd-logind[1438]: Removed session 2. Jan 29 11:54:08.238952 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 50714 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:08.240648 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:08.246115 systemd-logind[1438]: New session 3 of user core. Jan 29 11:54:08.256045 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:54:08.310448 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:08.320105 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:50714.service: Deactivated successfully. Jan 29 11:54:08.322757 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:54:08.324536 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:54:08.326176 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:50724.service - OpenSSH per-connection server daemon (10.0.0.1:50724). Jan 29 11:54:08.327341 systemd-logind[1438]: Removed session 3. Jan 29 11:54:08.368125 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 50724 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:08.369953 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:08.374368 systemd-logind[1438]: New session 4 of user core. Jan 29 11:54:08.383964 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:54:08.442968 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:08.460620 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:50724.service: Deactivated successfully. Jan 29 11:54:08.463336 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:54:08.465439 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:54:08.476137 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:50738.service - OpenSSH per-connection server daemon (10.0.0.1:50738). Jan 29 11:54:08.477506 systemd-logind[1438]: Removed session 4. 
Jan 29 11:54:08.513037 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 50738 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:08.514893 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:08.520579 systemd-logind[1438]: New session 5 of user core. Jan 29 11:54:08.536069 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:54:08.598515 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:54:08.599055 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:54:08.616458 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 29 11:54:08.618689 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:08.631445 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:50738.service: Deactivated successfully. Jan 29 11:54:08.633703 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:54:08.635449 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:54:08.636992 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:50754.service - OpenSSH per-connection server daemon (10.0.0.1:50754). Jan 29 11:54:08.637753 systemd-logind[1438]: Removed session 5. Jan 29 11:54:08.676214 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 50754 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:08.678219 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:08.682538 systemd-logind[1438]: New session 6 of user core. Jan 29 11:54:08.692059 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:54:08.748140 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:54:08.748505 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:54:08.752959 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 29 11:54:08.760250 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:54:08.760614 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:54:08.780162 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:54:08.781958 auditctl[1602]: No rules Jan 29 11:54:08.782422 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:54:08.782695 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:54:08.785891 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:54:08.822862 augenrules[1620]: No rules Jan 29 11:54:08.824821 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:54:08.826228 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 29 11:54:08.828454 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:08.843302 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:50754.service: Deactivated successfully. Jan 29 11:54:08.845378 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:54:08.847051 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:54:08.861236 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:50756.service - OpenSSH per-connection server daemon (10.0.0.1:50756). Jan 29 11:54:08.862953 systemd-logind[1438]: Removed session 6. 
Jan 29 11:54:08.898688 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 50756 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:08.900557 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:08.905057 systemd-logind[1438]: New session 7 of user core. Jan 29 11:54:08.916956 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:54:08.975432 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:54:08.975883 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:54:09.441094 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:54:09.441230 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:54:10.074424 dockerd[1649]: time="2025-01-29T11:54:10.074341550Z" level=info msg="Starting up" Jan 29 11:54:10.830613 dockerd[1649]: time="2025-01-29T11:54:10.830506870Z" level=info msg="Loading containers: start." Jan 29 11:54:11.182932 kernel: Initializing XFRM netlink socket Jan 29 11:54:11.285144 systemd-networkd[1376]: docker0: Link UP Jan 29 11:54:11.498114 dockerd[1649]: time="2025-01-29T11:54:11.498003065Z" level=info msg="Loading containers: done." Jan 29 11:54:11.525518 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1959576550-merged.mount: Deactivated successfully. Jan 29 11:54:11.628768 dockerd[1649]: time="2025-01-29T11:54:11.628572894Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:54:11.628768 dockerd[1649]: time="2025-01-29T11:54:11.628736831Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:54:11.628984 dockerd[1649]: time="2025-01-29T11:54:11.628946073Z" level=info msg="Daemon has completed initialization" Jan 29 11:54:11.817759 dockerd[1649]: time="2025-01-29T11:54:11.817660824Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:54:11.818671 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:54:12.634670 containerd[1460]: time="2025-01-29T11:54:12.634590637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:54:13.798583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462403924.mount: Deactivated successfully. 
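
dockerd reports its API on /run/docker.sock above. A quick way to confirm the daemon is healthy over that socket, sketched with the Docker Go SDK (github.com/docker/docker/client); FromEnv falls back to the default unix socket when DOCKER_HOST is unset:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Defaults to unix:///var/run/docker.sock, which on this host is the
	// /run/docker.sock path the daemon logged (note the /var/run legacy-path
	// warning from docker.socket during the systemd reload below).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("daemon reachable, API version %s", ping.APIVersion)
}
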
Jan 29 11:54:15.491667 containerd[1460]: time="2025-01-29T11:54:15.491598927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:15.521164 containerd[1460]: time="2025-01-29T11:54:15.521095984Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 29 11:54:15.538136 containerd[1460]: time="2025-01-29T11:54:15.538059981Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:15.562255 containerd[1460]: time="2025-01-29T11:54:15.562170558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:15.563558 containerd[1460]: time="2025-01-29T11:54:15.563507645Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.928830106s" Jan 29 11:54:15.563558 containerd[1460]: time="2025-01-29T11:54:15.563541839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 11:54:15.564191 containerd[1460]: time="2025-01-29T11:54:15.564169597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:54:17.201140 containerd[1460]: time="2025-01-29T11:54:17.201043715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:17.238936 containerd[1460]: time="2025-01-29T11:54:17.238825977Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 29 11:54:17.274846 containerd[1460]: time="2025-01-29T11:54:17.274768128Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:17.296204 containerd[1460]: time="2025-01-29T11:54:17.296153746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:17.354812 containerd[1460]: time="2025-01-29T11:54:17.354713025Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.790425647s" Jan 29 11:54:17.354812 containerd[1460]: time="2025-01-29T11:54:17.354768739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 11:54:17.355441 
containerd[1460]: time="2025-01-29T11:54:17.355412997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:54:17.665359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:54:17.678074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:18.456826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:18.461818 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:54:18.583650 kubelet[1864]: E0129 11:54:18.583533 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:54:18.591581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:54:18.591851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:54:19.604145 containerd[1460]: time="2025-01-29T11:54:19.603997583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:19.605103 containerd[1460]: time="2025-01-29T11:54:19.604986938Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 29 11:54:19.607186 containerd[1460]: time="2025-01-29T11:54:19.607119356Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:19.611293 containerd[1460]: time="2025-01-29T11:54:19.611218873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:19.612874 containerd[1460]: time="2025-01-29T11:54:19.612827289Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 2.257383304s" Jan 29 11:54:19.612874 containerd[1460]: time="2025-01-29T11:54:19.612871542Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 11:54:19.613708 containerd[1460]: time="2025-01-29T11:54:19.613579259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:54:21.151914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329764687.mount: Deactivated successfully. 
Jan 29 11:54:21.926950 containerd[1460]: time="2025-01-29T11:54:21.926735275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:21.927856 containerd[1460]: time="2025-01-29T11:54:21.927729239Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 29 11:54:21.929280 containerd[1460]: time="2025-01-29T11:54:21.929233520Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:21.931552 containerd[1460]: time="2025-01-29T11:54:21.931498817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:21.932432 containerd[1460]: time="2025-01-29T11:54:21.932389437Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.318744395s" Jan 29 11:54:21.932506 containerd[1460]: time="2025-01-29T11:54:21.932435323Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 11:54:21.933224 containerd[1460]: time="2025-01-29T11:54:21.933139744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:54:22.570777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109526875.mount: Deactivated successfully. 
Jan 29 11:54:24.182646 containerd[1460]: time="2025-01-29T11:54:24.182558273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:24.183771 containerd[1460]: time="2025-01-29T11:54:24.183700525Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 29 11:54:24.185646 containerd[1460]: time="2025-01-29T11:54:24.185603132Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:24.189384 containerd[1460]: time="2025-01-29T11:54:24.189343906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:24.190456 containerd[1460]: time="2025-01-29T11:54:24.190398193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.257212553s" Jan 29 11:54:24.190456 containerd[1460]: time="2025-01-29T11:54:24.190448938Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 11:54:24.191145 containerd[1460]: time="2025-01-29T11:54:24.190975295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:54:24.883762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312078069.mount: Deactivated successfully. 
Jan 29 11:54:24.890137 containerd[1460]: time="2025-01-29T11:54:24.890056262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:24.890979 containerd[1460]: time="2025-01-29T11:54:24.890905845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:54:24.892637 containerd[1460]: time="2025-01-29T11:54:24.892604750Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:24.898225 containerd[1460]: time="2025-01-29T11:54:24.898152582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:24.899288 containerd[1460]: time="2025-01-29T11:54:24.899245091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 708.237115ms" Jan 29 11:54:24.899365 containerd[1460]: time="2025-01-29T11:54:24.899293972Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:54:24.900000 containerd[1460]: time="2025-01-29T11:54:24.899959811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:54:25.652959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751200291.mount: Deactivated successfully. Jan 29 11:54:28.150687 containerd[1460]: time="2025-01-29T11:54:28.150587958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:28.151609 containerd[1460]: time="2025-01-29T11:54:28.151546074Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 29 11:54:28.153370 containerd[1460]: time="2025-01-29T11:54:28.153275527Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:28.156487 containerd[1460]: time="2025-01-29T11:54:28.156418991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:28.157865 containerd[1460]: time="2025-01-29T11:54:28.157819677Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.257813069s" Jan 29 11:54:28.157865 containerd[1460]: time="2025-01-29T11:54:28.157854583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 11:54:28.842142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 29 11:54:28.848967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:29.019982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:29.026015 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:54:29.185982 kubelet[2029]: E0129 11:54:29.185763 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:54:29.190520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:54:29.190821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:54:30.753049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:30.763297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:30.795772 systemd[1]: Reloading requested from client PID 2044 ('systemctl') (unit session-7.scope)... Jan 29 11:54:30.795809 systemd[1]: Reloading... Jan 29 11:54:30.890939 zram_generator::config[2083]: No configuration found. Jan 29 11:54:31.942268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:54:32.031703 systemd[1]: Reloading finished in 1235 ms. Jan 29 11:54:32.080982 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:54:32.081085 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:54:32.081513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:32.084513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:32.253597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:32.258432 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:54:32.312138 kubelet[2132]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:54:32.312138 kubelet[2132]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:54:32.312138 kubelet[2132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:54:32.312600 kubelet[2132]: I0129 11:54:32.312200 2132 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:54:32.753300 kubelet[2132]: I0129 11:54:32.753037 2132 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:54:32.753300 kubelet[2132]: I0129 11:54:32.753091 2132 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:54:32.753503 kubelet[2132]: I0129 11:54:32.753448 2132 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:54:32.809477 kubelet[2132]: I0129 11:54:32.809412 2132 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:54:32.824145 kubelet[2132]: E0129 11:54:32.824096 2132 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:32.839991 kubelet[2132]: E0129 11:54:32.839928 2132 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:54:32.839991 kubelet[2132]: I0129 11:54:32.839970 2132 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:54:32.854616 kubelet[2132]: I0129 11:54:32.854548 2132 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:54:32.859817 kubelet[2132]: I0129 11:54:32.859719 2132 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:54:32.860015 kubelet[2132]: I0129 11:54:32.859809 2132 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:54:32.860143 kubelet[2132]: I0129 11:54:32.860017 2132 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:54:32.860143 kubelet[2132]: I0129 11:54:32.860027 2132 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:54:32.860219 kubelet[2132]: I0129 11:54:32.860202 2132 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:32.890273 kubelet[2132]: I0129 11:54:32.890198 2132 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:54:32.890273 kubelet[2132]: I0129 11:54:32.890270 2132 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:54:32.890471 kubelet[2132]: I0129 11:54:32.890312 2132 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:54:32.890471 kubelet[2132]: I0129 11:54:32.890326 2132 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:54:32.901475 kubelet[2132]: I0129 11:54:32.901418 2132 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:54:32.901942 kubelet[2132]: I0129 11:54:32.901910 2132 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:54:32.902872 kubelet[2132]: W0129 11:54:32.902828 2132 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:54:32.908550 kubelet[2132]: W0129 11:54:32.908452 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:32.908550 kubelet[2132]: E0129 11:54:32.908533 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:32.908816 kubelet[2132]: W0129 11:54:32.908739 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:32.908872 kubelet[2132]: E0129 11:54:32.908817 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:32.913283 kubelet[2132]: I0129 11:54:32.913237 2132 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:54:32.913283 kubelet[2132]: I0129 11:54:32.913290 2132 server.go:1287] "Started kubelet" Jan 29 11:54:32.916277 kubelet[2132]: I0129 11:54:32.916207 2132 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:54:32.916277 kubelet[2132]: I0129 11:54:32.916210 2132 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:54:32.916743 kubelet[2132]: I0129 11:54:32.916699 2132 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:54:32.917549 kubelet[2132]: I0129 11:54:32.917527 2132 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:54:32.918556 kubelet[2132]: I0129 11:54:32.918524 2132 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:54:32.918831 kubelet[2132]: I0129 11:54:32.918803 2132 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:54:32.921357 kubelet[2132]: E0129 11:54:32.921119 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:32.921357 kubelet[2132]: I0129 11:54:32.921157 2132 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:54:32.921479 kubelet[2132]: I0129 11:54:32.921376 2132 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:54:32.921479 kubelet[2132]: I0129 11:54:32.921464 2132 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:54:32.921967 kubelet[2132]: W0129 11:54:32.921923 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:32.922051 kubelet[2132]: E0129 11:54:32.921979 2132 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:32.922233 kubelet[2132]: I0129 11:54:32.922200 2132 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:54:32.922328 kubelet[2132]: I0129 11:54:32.922305 2132 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:54:32.923465 kubelet[2132]: E0129 11:54:32.923431 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" Jan 29 11:54:32.923618 kubelet[2132]: I0129 11:54:32.923596 2132 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:54:32.934822 kubelet[2132]: E0129 11:54:32.934732 2132 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:54:32.938708 kubelet[2132]: E0129 11:54:32.936917 2132 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f27bcc5beeb15 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:54:32.913259285 +0000 UTC m=+0.650424507,LastTimestamp:2025-01-29 11:54:32.913259285 +0000 UTC m=+0.650424507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:54:32.943004 kubelet[2132]: I0129 11:54:32.942927 2132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:54:32.945144 kubelet[2132]: I0129 11:54:32.945104 2132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:54:32.945144 kubelet[2132]: I0129 11:54:32.945139 2132 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:54:32.945256 kubelet[2132]: I0129 11:54:32.945178 2132 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 11:54:32.945256 kubelet[2132]: I0129 11:54:32.945187 2132 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:54:32.945330 kubelet[2132]: E0129 11:54:32.945250 2132 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:54:32.950904 kubelet[2132]: W0129 11:54:32.950831 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:32.951137 kubelet[2132]: E0129 11:54:32.951093 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:32.957877 kubelet[2132]: I0129 11:54:32.957822 2132 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:54:32.957877 kubelet[2132]: I0129 11:54:32.957857 2132 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:54:32.957877 kubelet[2132]: I0129 11:54:32.957885 2132 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:33.022382 kubelet[2132]: E0129 11:54:33.022153 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.045593 kubelet[2132]: E0129 11:54:33.045530 2132 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:54:33.123138 kubelet[2132]: E0129 11:54:33.123082 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.124634 kubelet[2132]: E0129 11:54:33.124603 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" Jan 29 11:54:33.224123 kubelet[2132]: E0129 11:54:33.224055 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.246387 kubelet[2132]: E0129 11:54:33.246298 2132 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:54:33.325007 kubelet[2132]: E0129 11:54:33.324857 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.425052 kubelet[2132]: E0129 11:54:33.424966 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.525770 kubelet[2132]: E0129 11:54:33.525698 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.526295 kubelet[2132]: E0129 11:54:33.526230 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" Jan 29 11:54:33.626940 kubelet[2132]: E0129 11:54:33.626745 2132 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.647060 kubelet[2132]: E0129 11:54:33.646978 2132 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:54:33.727686 kubelet[2132]: E0129 11:54:33.727596 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.828752 kubelet[2132]: E0129 11:54:33.828650 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.850876 kubelet[2132]: W0129 11:54:33.850717 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:33.850876 kubelet[2132]: E0129 11:54:33.850859 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:33.914467 kubelet[2132]: I0129 11:54:33.914232 2132 policy_none.go:49] "None policy: Start" Jan 29 11:54:33.914467 kubelet[2132]: I0129 11:54:33.914299 2132 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:54:33.914467 kubelet[2132]: I0129 11:54:33.914348 2132 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:54:33.926214 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:54:33.929152 kubelet[2132]: E0129 11:54:33.929114 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:33.942217 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:54:33.946057 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:54:33.957125 kubelet[2132]: I0129 11:54:33.957090 2132 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:54:33.957633 kubelet[2132]: I0129 11:54:33.957373 2132 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:54:33.957633 kubelet[2132]: I0129 11:54:33.957403 2132 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:54:33.957733 kubelet[2132]: I0129 11:54:33.957677 2132 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:54:33.958503 kubelet[2132]: E0129 11:54:33.958477 2132 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:54:33.958555 kubelet[2132]: E0129 11:54:33.958533 2132 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:54:34.059780 kubelet[2132]: I0129 11:54:34.059706 2132 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:54:34.060185 kubelet[2132]: E0129 11:54:34.060147 2132 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Jan 29 11:54:34.209088 kubelet[2132]: W0129 11:54:34.208910 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:34.209088 kubelet[2132]: E0129 11:54:34.208981 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:34.262738 kubelet[2132]: I0129 11:54:34.262677 2132 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:54:34.263252 kubelet[2132]: E0129 11:54:34.263200 2132 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Jan 29 11:54:34.327922 kubelet[2132]: E0129 11:54:34.327859 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s" Jan 29 11:54:34.460985 systemd[1]: Created slice kubepods-burstable-pod54d9cab4b2303049810e3f81a6e73032.slice - libcontainer container kubepods-burstable-pod54d9cab4b2303049810e3f81a6e73032.slice. Jan 29 11:54:34.483245 kubelet[2132]: E0129 11:54:34.483198 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:34.485557 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. 
Jan 29 11:54:34.490252 kubelet[2132]: W0129 11:54:34.490174 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:34.490403 kubelet[2132]: E0129 11:54:34.490254 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:34.493249 kubelet[2132]: E0129 11:54:34.493224 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:34.496311 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 29 11:54:34.498919 kubelet[2132]: E0129 11:54:34.498890 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:34.516823 kubelet[2132]: W0129 11:54:34.516697 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:34.516823 kubelet[2132]: E0129 11:54:34.516825 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:34.533851 kubelet[2132]: I0129 11:54:34.533738 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54d9cab4b2303049810e3f81a6e73032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54d9cab4b2303049810e3f81a6e73032\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:34.533851 kubelet[2132]: I0129 11:54:34.533829 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54d9cab4b2303049810e3f81a6e73032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54d9cab4b2303049810e3f81a6e73032\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:34.534070 kubelet[2132]: I0129 11:54:34.533881 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:34.534070 kubelet[2132]: I0129 11:54:34.533906 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:34.534070 kubelet[2132]: I0129 11:54:34.533957 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:34.534070 kubelet[2132]: I0129 11:54:34.534048 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54d9cab4b2303049810e3f81a6e73032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54d9cab4b2303049810e3f81a6e73032\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:34.534221 kubelet[2132]: I0129 11:54:34.534106 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:34.534221 kubelet[2132]: I0129 11:54:34.534132 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:34.534221 kubelet[2132]: I0129 11:54:34.534154 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:34.664824 kubelet[2132]: I0129 11:54:34.664770 2132 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:54:34.665215 kubelet[2132]: E0129 11:54:34.665179 2132 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Jan 29 11:54:34.784239 kubelet[2132]: E0129 11:54:34.784105 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:34.784892 containerd[1460]: time="2025-01-29T11:54:34.784843034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54d9cab4b2303049810e3f81a6e73032,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:34.794067 kubelet[2132]: E0129 11:54:34.794026 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:34.794757 containerd[1460]: time="2025-01-29T11:54:34.794684387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:34.800199 kubelet[2132]: E0129 11:54:34.800134 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:34.801066 containerd[1460]: time="2025-01-29T11:54:34.801002824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:34.968525 kubelet[2132]: E0129 11:54:34.968456 2132 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:35.467137 kubelet[2132]: I0129 11:54:35.467085 2132 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:54:35.467623 kubelet[2132]: E0129 11:54:35.467576 2132 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Jan 29 11:54:35.499473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495985591.mount: Deactivated successfully. Jan 29 11:54:35.507813 containerd[1460]: time="2025-01-29T11:54:35.507674854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:35.509759 containerd[1460]: time="2025-01-29T11:54:35.509718936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:54:35.510908 containerd[1460]: time="2025-01-29T11:54:35.510862350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:35.512042 containerd[1460]: time="2025-01-29T11:54:35.511995274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:35.513013 containerd[1460]: time="2025-01-29T11:54:35.512983307Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:35.514059 containerd[1460]: time="2025-01-29T11:54:35.513902581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:54:35.515281 containerd[1460]: time="2025-01-29T11:54:35.515236993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:54:35.516867 containerd[1460]: time="2025-01-29T11:54:35.516826783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:35.520175 containerd[1460]: time="2025-01-29T11:54:35.520125509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.011146ms" Jan 29 11:54:35.521978 containerd[1460]: time="2025-01-29T11:54:35.521931004Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.984145ms" Jan 29 11:54:35.525868 containerd[1460]: time="2025-01-29T11:54:35.525836186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.006677ms" Jan 29 11:54:35.612002 kubelet[2132]: W0129 11:54:35.611936 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:35.612180 kubelet[2132]: E0129 11:54:35.612012 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:35.619474 containerd[1460]: time="2025-01-29T11:54:35.619247633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:35.619474 containerd[1460]: time="2025-01-29T11:54:35.619344254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:35.619474 containerd[1460]: time="2025-01-29T11:54:35.619362207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:35.619693 containerd[1460]: time="2025-01-29T11:54:35.619491360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:35.644044 systemd[1]: Started cri-containerd-e52f6d4711ced48fde8ad0349b981fd49e2c5a687c1300d39f66ed7c1e83ae36.scope - libcontainer container e52f6d4711ced48fde8ad0349b981fd49e2c5a687c1300d39f66ed7c1e83ae36. Jan 29 11:54:35.654498 containerd[1460]: time="2025-01-29T11:54:35.654386247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:35.654498 containerd[1460]: time="2025-01-29T11:54:35.654444326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:35.654498 containerd[1460]: time="2025-01-29T11:54:35.654460346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:35.654711 containerd[1460]: time="2025-01-29T11:54:35.654536309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:35.657392 containerd[1460]: time="2025-01-29T11:54:35.657098403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:35.657392 containerd[1460]: time="2025-01-29T11:54:35.657164336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:35.657392 containerd[1460]: time="2025-01-29T11:54:35.657183342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:35.657392 containerd[1460]: time="2025-01-29T11:54:35.657291855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:35.681043 systemd[1]: Started cri-containerd-13677c4e648465fb21ce67c8465e10bd9a2bbe7d831bf71bfe244ce619169e28.scope - libcontainer container 13677c4e648465fb21ce67c8465e10bd9a2bbe7d831bf71bfe244ce619169e28. Jan 29 11:54:35.684015 systemd[1]: Started cri-containerd-96a604142a7eebda8c51aad21cb3a4e0645f957b90ede045ff65a8ca29c8fa65.scope - libcontainer container 96a604142a7eebda8c51aad21cb3a4e0645f957b90ede045ff65a8ca29c8fa65. Jan 29 11:54:35.694670 containerd[1460]: time="2025-01-29T11:54:35.694620958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"e52f6d4711ced48fde8ad0349b981fd49e2c5a687c1300d39f66ed7c1e83ae36\"" Jan 29 11:54:35.696423 kubelet[2132]: E0129 11:54:35.696397 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:35.699004 containerd[1460]: time="2025-01-29T11:54:35.698930187Z" level=info msg="CreateContainer within sandbox \"e52f6d4711ced48fde8ad0349b981fd49e2c5a687c1300d39f66ed7c1e83ae36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:54:35.725176 containerd[1460]: time="2025-01-29T11:54:35.725005999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"96a604142a7eebda8c51aad21cb3a4e0645f957b90ede045ff65a8ca29c8fa65\"" Jan 29 11:54:35.726310 kubelet[2132]: E0129 11:54:35.726211 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:35.729144 containerd[1460]: time="2025-01-29T11:54:35.729111177Z" level=info msg="CreateContainer within sandbox \"96a604142a7eebda8c51aad21cb3a4e0645f957b90ede045ff65a8ca29c8fa65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:54:35.732243 containerd[1460]: time="2025-01-29T11:54:35.732190180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54d9cab4b2303049810e3f81a6e73032,Namespace:kube-system,Attempt:0,} returns sandbox id \"13677c4e648465fb21ce67c8465e10bd9a2bbe7d831bf71bfe244ce619169e28\"" Jan 29 11:54:35.732783 kubelet[2132]: E0129 11:54:35.732760 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
29 11:54:35.734375 containerd[1460]: time="2025-01-29T11:54:35.734285559Z" level=info msg="CreateContainer within sandbox \"13677c4e648465fb21ce67c8465e10bd9a2bbe7d831bf71bfe244ce619169e28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:54:35.928785 kubelet[2132]: E0129 11:54:35.928699 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="3.2s" Jan 29 11:54:36.009350 kubelet[2132]: W0129 11:54:36.009141 2132 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Jan 29 11:54:36.009350 kubelet[2132]: E0129 11:54:36.009216 2132 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:36.147695 containerd[1460]: time="2025-01-29T11:54:36.147597028Z" level=info msg="CreateContainer within sandbox \"96a604142a7eebda8c51aad21cb3a4e0645f957b90ede045ff65a8ca29c8fa65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5521a4551c97e03efd743473a86c68c2f6343eec3f2a13e73c6629236ee4bef6\"" Jan 29 11:54:36.148645 containerd[1460]: time="2025-01-29T11:54:36.148592584Z" level=info msg="StartContainer for \"5521a4551c97e03efd743473a86c68c2f6343eec3f2a13e73c6629236ee4bef6\"" Jan 29 11:54:36.170230 containerd[1460]: time="2025-01-29T11:54:36.170030024Z" level=info msg="CreateContainer within sandbox \"13677c4e648465fb21ce67c8465e10bd9a2bbe7d831bf71bfe244ce619169e28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5bb38c98332fca9f8a33577498ecb119ea94ff6f93c3ca80927555dba20239e1\"" Jan 29 11:54:36.170859 containerd[1460]: time="2025-01-29T11:54:36.170816250Z" level=info msg="StartContainer for \"5bb38c98332fca9f8a33577498ecb119ea94ff6f93c3ca80927555dba20239e1\"" Jan 29 11:54:36.173817 containerd[1460]: time="2025-01-29T11:54:36.173681738Z" level=info msg="CreateContainer within sandbox \"e52f6d4711ced48fde8ad0349b981fd49e2c5a687c1300d39f66ed7c1e83ae36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"925141083c1ec8ee9e904c09800ca4c728da9343014ca83aa885a5c48e2c2bde\"" Jan 29 11:54:36.174298 containerd[1460]: time="2025-01-29T11:54:36.174218346Z" level=info msg="StartContainer for \"925141083c1ec8ee9e904c09800ca4c728da9343014ca83aa885a5c48e2c2bde\"" Jan 29 11:54:36.184161 systemd[1]: Started cri-containerd-5521a4551c97e03efd743473a86c68c2f6343eec3f2a13e73c6629236ee4bef6.scope - libcontainer container 5521a4551c97e03efd743473a86c68c2f6343eec3f2a13e73c6629236ee4bef6. Jan 29 11:54:36.205246 systemd[1]: Started cri-containerd-925141083c1ec8ee9e904c09800ca4c728da9343014ca83aa885a5c48e2c2bde.scope - libcontainer container 925141083c1ec8ee9e904c09800ca4c728da9343014ca83aa885a5c48e2c2bde. Jan 29 11:54:36.319031 systemd[1]: Started cri-containerd-5bb38c98332fca9f8a33577498ecb119ea94ff6f93c3ca80927555dba20239e1.scope - libcontainer container 5bb38c98332fca9f8a33577498ecb119ea94ff6f93c3ca80927555dba20239e1. 
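
[Editor's note] The lease controller's "Failed to ensure lease exists, will retry" interval doubles across this log: 200ms, 400ms, 800ms, 1.6s, and now 3.2s. That progression is a factor-2 exponential backoff; the kubelet manages it internally, but the pattern can be sketched with apimachinery's wait.Backoff:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Reproduces the retry intervals observed in the log:
	// 200ms, 400ms, 800ms, 1.6s, 3.2s.
	b := wait.Backoff{
		Duration: 200 * time.Millisecond, // initial retry interval
		Factor:   2,                      // double after each failure
		Steps:    5,                      // growth steps before the delay stops increasing
	}
	for i := 0; i < 5; i++ {
		fmt.Println(b.Step()) // returns the current delay, then advances the backoff
	}
}
```
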
Jan 29 11:54:36.349728 containerd[1460]: time="2025-01-29T11:54:36.349485989Z" level=info msg="StartContainer for \"5521a4551c97e03efd743473a86c68c2f6343eec3f2a13e73c6629236ee4bef6\" returns successfully" Jan 29 11:54:36.370566 containerd[1460]: time="2025-01-29T11:54:36.370388314Z" level=info msg="StartContainer for \"925141083c1ec8ee9e904c09800ca4c728da9343014ca83aa885a5c48e2c2bde\" returns successfully" Jan 29 11:54:36.382895 containerd[1460]: time="2025-01-29T11:54:36.382785679Z" level=info msg="StartContainer for \"5bb38c98332fca9f8a33577498ecb119ea94ff6f93c3ca80927555dba20239e1\" returns successfully" Jan 29 11:54:36.963583 kubelet[2132]: E0129 11:54:36.963539 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:36.964019 kubelet[2132]: E0129 11:54:36.963726 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:36.966395 kubelet[2132]: E0129 11:54:36.966104 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:36.966395 kubelet[2132]: E0129 11:54:36.966273 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:36.967869 kubelet[2132]: E0129 11:54:36.967826 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:36.968510 kubelet[2132]: E0129 11:54:36.968476 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:37.070249 kubelet[2132]: I0129 11:54:37.070201 2132 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:54:37.725216 kubelet[2132]: I0129 11:54:37.725153 2132 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:54:37.725216 kubelet[2132]: E0129 11:54:37.725200 2132 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:54:37.728618 kubelet[2132]: E0129 11:54:37.728579 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:37.829219 kubelet[2132]: E0129 11:54:37.829165 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:37.929680 kubelet[2132]: E0129 11:54:37.929608 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:37.969081 kubelet[2132]: E0129 11:54:37.969044 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:37.969508 kubelet[2132]: E0129 11:54:37.969159 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:37.969508 kubelet[2132]: E0129 11:54:37.969179 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:37.969508 kubelet[2132]: E0129 11:54:37.969258 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:37.969508 kubelet[2132]: E0129 11:54:37.969324 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:37.969508 kubelet[2132]: E0129 11:54:37.969472 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:38.030158 kubelet[2132]: E0129 11:54:38.030076 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.131116 kubelet[2132]: E0129 11:54:38.131019 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.231958 kubelet[2132]: E0129 11:54:38.231876 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.332391 kubelet[2132]: E0129 11:54:38.332202 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.432822 kubelet[2132]: E0129 11:54:38.432722 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.533841 kubelet[2132]: E0129 11:54:38.533734 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.634769 kubelet[2132]: E0129 11:54:38.634573 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.735729 kubelet[2132]: E0129 11:54:38.735653 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.836215 kubelet[2132]: E0129 11:54:38.836146 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.936607 kubelet[2132]: E0129 11:54:38.936434 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:38.970455 kubelet[2132]: E0129 11:54:38.970400 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:38.970915 kubelet[2132]: E0129 11:54:38.970568 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:38.970915 kubelet[2132]: E0129 11:54:38.970600 2132 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:54:38.970915 kubelet[2132]: E0129 11:54:38.970773 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:39.037345 kubelet[2132]: E0129 11:54:39.037256 2132 kubelet_node_status.go:467] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 29 11:54:39.137470 kubelet[2132]: E0129 11:54:39.137411 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:39.238739 kubelet[2132]: E0129 11:54:39.238473 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:39.339470 kubelet[2132]: E0129 11:54:39.339398 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:39.439734 kubelet[2132]: E0129 11:54:39.439680 2132 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:39.623379 kubelet[2132]: I0129 11:54:39.623290 2132 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:39.635923 kubelet[2132]: I0129 11:54:39.635875 2132 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:39.640104 kubelet[2132]: I0129 11:54:39.640079 2132 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:39.769571 kubelet[2132]: I0129 11:54:39.769515 2132 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:39.903163 kubelet[2132]: I0129 11:54:39.902986 2132 apiserver.go:52] "Watching apiserver" Jan 29 11:54:39.914583 kubelet[2132]: E0129 11:54:39.914525 2132 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:39.914797 kubelet[2132]: E0129 11:54:39.914762 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:39.921610 kubelet[2132]: I0129 11:54:39.921569 2132 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:54:39.970422 kubelet[2132]: I0129 11:54:39.970343 2132 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:39.970610 kubelet[2132]: E0129 11:54:39.970583 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:39.971041 kubelet[2132]: E0129 11:54:39.970633 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:40.094553 kubelet[2132]: E0129 11:54:40.094501 2132 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:40.094795 kubelet[2132]: E0129 11:54:40.094773 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:40.972625 kubelet[2132]: E0129 11:54:40.972589 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:40.973168 kubelet[2132]: E0129 11:54:40.972726 2132 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:41.224149 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)... Jan 29 11:54:41.224165 systemd[1]: Reloading... Jan 29 11:54:41.296832 zram_generator::config[2450]: No configuration found. Jan 29 11:54:41.436661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:54:41.533291 systemd[1]: Reloading finished in 308 ms. Jan 29 11:54:41.587420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:41.612493 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:54:41.612846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:41.612912 systemd[1]: kubelet.service: Consumed 1.168s CPU time, 127.8M memory peak, 0B memory swap peak. Jan 29 11:54:41.625244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:41.821705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:41.828510 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:54:41.876797 kubelet[2494]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:54:41.876797 kubelet[2494]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:54:41.876797 kubelet[2494]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:54:41.877230 kubelet[2494]: I0129 11:54:41.876867 2494 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:54:41.883625 kubelet[2494]: I0129 11:54:41.883587 2494 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:54:41.883625 kubelet[2494]: I0129 11:54:41.883614 2494 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:54:41.883923 kubelet[2494]: I0129 11:54:41.883903 2494 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:54:41.885143 kubelet[2494]: I0129 11:54:41.885121 2494 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:54:41.887488 kubelet[2494]: I0129 11:54:41.887453 2494 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:54:41.892103 kubelet[2494]: E0129 11:54:41.892041 2494 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:54:41.892103 kubelet[2494]: I0129 11:54:41.892077 2494 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 29 11:54:41.897048 kubelet[2494]: I0129 11:54:41.897001 2494 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:54:41.897276 kubelet[2494]: I0129 11:54:41.897239 2494 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:54:41.897434 kubelet[2494]: I0129 11:54:41.897271 2494 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:54:41.897536 kubelet[2494]: I0129 11:54:41.897436 2494 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:54:41.897536 kubelet[2494]: I0129 11:54:41.897445 2494 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:54:41.897536 kubelet[2494]: I0129 11:54:41.897488 2494 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:41.897639 kubelet[2494]: I0129 11:54:41.897626 2494 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:54:41.897664 kubelet[2494]: I0129 11:54:41.897641 2494 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:54:41.897664 kubelet[2494]: I0129 11:54:41.897657 2494 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:54:41.897713 kubelet[2494]: I0129 11:54:41.897667 2494 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:54:41.901813 kubelet[2494]: I0129 11:54:41.898976 2494 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:54:41.901813 kubelet[2494]: I0129 11:54:41.899644 2494 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:54:41.901813 kubelet[2494]: I0129 11:54:41.900210 2494 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:54:41.901813 kubelet[2494]: I0129 11:54:41.900245 2494 server.go:1287] "Started kubelet" Jan 29 
11:54:41.901813 kubelet[2494]: I0129 11:54:41.901141 2494 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:54:41.902092 kubelet[2494]: I0129 11:54:41.902054 2494 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:54:41.903254 kubelet[2494]: I0129 11:54:41.903174 2494 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:54:41.904232 kubelet[2494]: I0129 11:54:41.903462 2494 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:54:41.905969 kubelet[2494]: I0129 11:54:41.904973 2494 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:54:41.907679 kubelet[2494]: I0129 11:54:41.907628 2494 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:54:41.914876 kubelet[2494]: I0129 11:54:41.914843 2494 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:54:41.916575 kubelet[2494]: I0129 11:54:41.914968 2494 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:54:41.917101 kubelet[2494]: I0129 11:54:41.917072 2494 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:54:41.917346 kubelet[2494]: E0129 11:54:41.917316 2494 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:54:41.917496 kubelet[2494]: I0129 11:54:41.917423 2494 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:54:41.917546 kubelet[2494]: I0129 11:54:41.917525 2494 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:54:41.919014 kubelet[2494]: I0129 11:54:41.918992 2494 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:54:41.923548 kubelet[2494]: I0129 11:54:41.923500 2494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:54:41.925118 kubelet[2494]: I0129 11:54:41.924773 2494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:54:41.925118 kubelet[2494]: I0129 11:54:41.924825 2494 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:54:41.925118 kubelet[2494]: I0129 11:54:41.924853 2494 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
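
[Editor's note] The restarted kubelet (PID entry 2494) again serves the podresources API on the unix socket named above. A minimal sketch of a gRPC client listing pod resources from that socket; it must run on the node itself with access to the socket, and the import path (assumed here to be k8s.io/kubelet/pkg/apis/podresources/v1) should be checked against the kubelet version in use:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path taken from the log line above; no TLS on the local socket.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.PodResources {
		fmt.Printf("%s/%s\n", pod.Namespace, pod.Name)
	}
}
```
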
Jan 29 11:54:41.925118 kubelet[2494]: I0129 11:54:41.924864 2494 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:54:41.925118 kubelet[2494]: E0129 11:54:41.924920 2494 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:54:41.954585 kubelet[2494]: I0129 11:54:41.954543 2494 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:54:41.954585 kubelet[2494]: I0129 11:54:41.954569 2494 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:54:41.954585 kubelet[2494]: I0129 11:54:41.954593 2494 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:41.954888 kubelet[2494]: I0129 11:54:41.954860 2494 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:54:41.954963 kubelet[2494]: I0129 11:54:41.954923 2494 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:54:41.954963 kubelet[2494]: I0129 11:54:41.954961 2494 policy_none.go:49] "None policy: Start" Jan 29 11:54:41.955089 kubelet[2494]: I0129 11:54:41.955051 2494 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:54:41.955089 kubelet[2494]: I0129 11:54:41.955089 2494 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:54:41.955339 kubelet[2494]: I0129 11:54:41.955212 2494 state_mem.go:75] "Updated machine memory state" Jan 29 11:54:41.959454 kubelet[2494]: I0129 11:54:41.959409 2494 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:54:41.959630 kubelet[2494]: I0129 11:54:41.959601 2494 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:54:41.959691 kubelet[2494]: I0129 11:54:41.959623 2494 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:54:41.960226 kubelet[2494]: I0129 11:54:41.959859 2494 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:54:41.961084 kubelet[2494]: E0129 11:54:41.961013 2494 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:54:42.026119 kubelet[2494]: I0129 11:54:42.026076 2494 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.026119 kubelet[2494]: I0129 11:54:42.026096 2494 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:42.026340 kubelet[2494]: I0129 11:54:42.026143 2494 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.031923 kubelet[2494]: E0129 11:54:42.031893 2494 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.032004 kubelet[2494]: E0129 11:54:42.031945 2494 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:42.032004 kubelet[2494]: E0129 11:54:42.031997 2494 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.068443 kubelet[2494]: I0129 11:54:42.068370 2494 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:54:42.118363 kubelet[2494]: I0129 11:54:42.118217 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54d9cab4b2303049810e3f81a6e73032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54d9cab4b2303049810e3f81a6e73032\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.118363 kubelet[2494]: I0129 11:54:42.118258 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54d9cab4b2303049810e3f81a6e73032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54d9cab4b2303049810e3f81a6e73032\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.118363 kubelet[2494]: I0129 11:54:42.118277 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.118363 kubelet[2494]: I0129 11:54:42.118315 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.118363 kubelet[2494]: I0129 11:54:42.118366 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.118606 kubelet[2494]: I0129 11:54:42.118388 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.118606 kubelet[2494]: I0129 11:54:42.118406 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:42.118606 kubelet[2494]: I0129 11:54:42.118440 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:42.118606 kubelet[2494]: I0129 11:54:42.118469 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54d9cab4b2303049810e3f81a6e73032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54d9cab4b2303049810e3f81a6e73032\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.129407 kubelet[2494]: I0129 11:54:42.129360 2494 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 11:54:42.129564 kubelet[2494]: I0129 11:54:42.129484 2494 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:54:42.332943 kubelet[2494]: E0129 11:54:42.332903 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:42.333103 kubelet[2494]: E0129 11:54:42.332909 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:42.333103 kubelet[2494]: E0129 11:54:42.333002 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:42.901827 kubelet[2494]: I0129 11:54:42.899675 2494 apiserver.go:52] "Watching apiserver" Jan 29 11:54:42.919073 kubelet[2494]: I0129 11:54:42.918983 2494 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:54:42.942182 kubelet[2494]: E0129 11:54:42.941450 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:42.942182 kubelet[2494]: I0129 11:54:42.941565 2494 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.942182 kubelet[2494]: E0129 11:54:42.942019 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:42.954493 kubelet[2494]: E0129 11:54:42.954441 2494 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:42.955000 kubelet[2494]: E0129 11:54:42.954980 
2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:42.980859 kubelet[2494]: I0129 11:54:42.980754 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.980727861 podStartE2EDuration="3.980727861s" podCreationTimestamp="2025-01-29 11:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:42.980124012 +0000 UTC m=+1.145801179" watchObservedRunningTime="2025-01-29 11:54:42.980727861 +0000 UTC m=+1.146405028" Jan 29 11:54:42.981096 kubelet[2494]: I0129 11:54:42.980926 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.980920968 podStartE2EDuration="3.980920968s" podCreationTimestamp="2025-01-29 11:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:42.965928407 +0000 UTC m=+1.131605574" watchObservedRunningTime="2025-01-29 11:54:42.980920968 +0000 UTC m=+1.146598135" Jan 29 11:54:43.945818 kubelet[2494]: E0129 11:54:43.942775 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:43.945818 kubelet[2494]: E0129 11:54:43.942998 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:44.116628 kubelet[2494]: E0129 11:54:44.116569 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:45.176738 kubelet[2494]: I0129 11:54:45.176683 2494 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:54:45.177455 kubelet[2494]: I0129 11:54:45.177309 2494 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:54:45.177499 containerd[1460]: time="2025-01-29T11:54:45.177080675Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:54:46.081277 kubelet[2494]: I0129 11:54:46.080250 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.080225102 podStartE2EDuration="7.080225102s" podCreationTimestamp="2025-01-29 11:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:42.992625925 +0000 UTC m=+1.158303112" watchObservedRunningTime="2025-01-29 11:54:46.080225102 +0000 UTC m=+4.245902269" Jan 29 11:54:46.093729 systemd[1]: Created slice kubepods-besteffort-pod2842d100_1dcc_40df_8c09_4dfa1b348bb5.slice - libcontainer container kubepods-besteffort-pod2842d100_1dcc_40df_8c09_4dfa1b348bb5.slice. 
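The dns.go:153 errors that recur throughout this boot come from kubelet capping a pod's resolv.conf at three nameservers, matching the classic glibc MAXNS resolver limit; the host here evidently lists at least four, so everything past 1.1.1.1, 1.0.0.1, 8.8.8.8 is dropped. A minimal sketch of that check (not kubelet's actual code) against a resolv.conf:

```go
// Count "nameserver" entries in a resolv.conf and flag the condition that
// produces the dns.go:153 warning above. The limit of 3 mirrors the glibc
// MAXNS resolver limit that kubelet enforces.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet omits anything beyond this

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v will be applied\n",
			len(servers), servers[:maxNameservers])
	}
}
```

Trimming the host's resolv.conf to three nameserver entries would silence this warning.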
Jan 29 11:54:46.140048 kubelet[2494]: I0129 11:54:46.139969 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2842d100-1dcc-40df-8c09-4dfa1b348bb5-kube-proxy\") pod \"kube-proxy-pjsb6\" (UID: \"2842d100-1dcc-40df-8c09-4dfa1b348bb5\") " pod="kube-system/kube-proxy-pjsb6" Jan 29 11:54:46.140048 kubelet[2494]: I0129 11:54:46.140018 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2842d100-1dcc-40df-8c09-4dfa1b348bb5-xtables-lock\") pod \"kube-proxy-pjsb6\" (UID: \"2842d100-1dcc-40df-8c09-4dfa1b348bb5\") " pod="kube-system/kube-proxy-pjsb6" Jan 29 11:54:46.140048 kubelet[2494]: I0129 11:54:46.140035 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2842d100-1dcc-40df-8c09-4dfa1b348bb5-lib-modules\") pod \"kube-proxy-pjsb6\" (UID: \"2842d100-1dcc-40df-8c09-4dfa1b348bb5\") " pod="kube-system/kube-proxy-pjsb6" Jan 29 11:54:46.140048 kubelet[2494]: I0129 11:54:46.140050 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tsf6\" (UniqueName: \"kubernetes.io/projected/2842d100-1dcc-40df-8c09-4dfa1b348bb5-kube-api-access-8tsf6\") pod \"kube-proxy-pjsb6\" (UID: \"2842d100-1dcc-40df-8c09-4dfa1b348bb5\") " pod="kube-system/kube-proxy-pjsb6" Jan 29 11:54:46.346143 kubelet[2494]: E0129 11:54:46.345995 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.406537 kubelet[2494]: E0129 11:54:46.406112 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.407073 containerd[1460]: time="2025-01-29T11:54:46.407013169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjsb6,Uid:2842d100-1dcc-40df-8c09-4dfa1b348bb5,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:46.618441 systemd[1]: Created slice kubepods-besteffort-pod9c2e1b9f_fa38_413a_a5f0_ccd22045501f.slice - libcontainer container kubepods-besteffort-pod9c2e1b9f_fa38_413a_a5f0_ccd22045501f.slice. Jan 29 11:54:46.730741 kubelet[2494]: I0129 11:54:46.642760 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28sfx\" (UniqueName: \"kubernetes.io/projected/9c2e1b9f-fa38-413a-a5f0-ccd22045501f-kube-api-access-28sfx\") pod \"tigera-operator-7d68577dc5-8b8xw\" (UID: \"9c2e1b9f-fa38-413a-a5f0-ccd22045501f\") " pod="tigera-operator/tigera-operator-7d68577dc5-8b8xw" Jan 29 11:54:46.730741 kubelet[2494]: I0129 11:54:46.642822 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c2e1b9f-fa38-413a-a5f0-ccd22045501f-var-lib-calico\") pod \"tigera-operator-7d68577dc5-8b8xw\" (UID: \"9c2e1b9f-fa38-413a-a5f0-ccd22045501f\") " pod="tigera-operator/tigera-operator-7d68577dc5-8b8xw" Jan 29 11:54:46.910456 containerd[1460]: time="2025-01-29T11:54:46.910137247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:46.910456 containerd[1460]: time="2025-01-29T11:54:46.910217739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:46.910456 containerd[1460]: time="2025-01-29T11:54:46.910233128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:46.910920 containerd[1460]: time="2025-01-29T11:54:46.910359507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:46.934087 systemd[1]: Started cri-containerd-4318706122c0d2002d53f9e19cab90a2036481a4201d4d1a41d9fc1ca4ca8ccf.scope - libcontainer container 4318706122c0d2002d53f9e19cab90a2036481a4201d4d1a41d9fc1ca4ca8ccf. Jan 29 11:54:46.948653 kubelet[2494]: E0129 11:54:46.948608 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.971194 containerd[1460]: time="2025-01-29T11:54:46.970627104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjsb6,Uid:2842d100-1dcc-40df-8c09-4dfa1b348bb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4318706122c0d2002d53f9e19cab90a2036481a4201d4d1a41d9fc1ca4ca8ccf\"" Jan 29 11:54:46.972665 kubelet[2494]: E0129 11:54:46.972416 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.974971 containerd[1460]: time="2025-01-29T11:54:46.974839021Z" level=info msg="CreateContainer within sandbox \"4318706122c0d2002d53f9e19cab90a2036481a4201d4d1a41d9fc1ca4ca8ccf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:54:46.982935 sudo[1631]: pam_unix(sudo:session): session closed for user root Jan 29 11:54:46.985921 sshd[1628]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:46.991397 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:50756.service: Deactivated successfully. Jan 29 11:54:46.994019 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:54:46.994274 systemd[1]: session-7.scope: Consumed 5.515s CPU time, 159.2M memory peak, 0B memory swap peak. Jan 29 11:54:46.994930 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:54:46.996078 systemd-logind[1438]: Removed session 7. 
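The kube-proxy startup above follows the standard CRI ordering that containerd logs: RunPodSandbox, then CreateContainer within the returned sandbox, then StartContainer. A simplified stand-in for that sequence (an illustration only, not the real k8s.io/cri-api client; the fake IDs stand for the long containerd hashes in the log):

```go
// Simplified stand-in for the CRI runtime calls visible in the log
// (RunPodSandbox -> CreateContainer -> StartContainer), showing the
// ordering kubelet follows when starting a pod like kube-proxy-pjsb6.
package main

import "fmt"

// runtime abstracts the three CRI calls the log shows containerd serving.
type runtime interface {
	RunPodSandbox(name, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name, ns string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil // stands in for "4318706122c0..."
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil // stands in for "b5d44960a411..."
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt runtime = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-proxy-pjsb6", "kube-system")
	ctr, _ := rt.CreateContainer(sb, "kube-proxy")
	if err := rt.StartContainer(ctr); err == nil {
		fmt.Println("started", ctr, "in sandbox", sb)
	}
}
```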
Jan 29 11:54:46.997075 containerd[1460]: time="2025-01-29T11:54:46.997022684Z" level=info msg="CreateContainer within sandbox \"4318706122c0d2002d53f9e19cab90a2036481a4201d4d1a41d9fc1ca4ca8ccf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5d44960a411bd035c7f3e79494373b72f6b82c7aa409928d9984d185379dc93\"" Jan 29 11:54:46.997838 containerd[1460]: time="2025-01-29T11:54:46.997765923Z" level=info msg="StartContainer for \"b5d44960a411bd035c7f3e79494373b72f6b82c7aa409928d9984d185379dc93\"" Jan 29 11:54:47.031640 containerd[1460]: time="2025-01-29T11:54:47.031490090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-8b8xw,Uid:9c2e1b9f-fa38-413a-a5f0-ccd22045501f,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:54:47.039091 systemd[1]: Started cri-containerd-b5d44960a411bd035c7f3e79494373b72f6b82c7aa409928d9984d185379dc93.scope - libcontainer container b5d44960a411bd035c7f3e79494373b72f6b82c7aa409928d9984d185379dc93. Jan 29 11:54:47.062839 containerd[1460]: time="2025-01-29T11:54:47.062554531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:47.062839 containerd[1460]: time="2025-01-29T11:54:47.062674960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:47.062839 containerd[1460]: time="2025-01-29T11:54:47.062695929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:47.063233 containerd[1460]: time="2025-01-29T11:54:47.062859961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:47.084203 containerd[1460]: time="2025-01-29T11:54:47.084153849Z" level=info msg="StartContainer for \"b5d44960a411bd035c7f3e79494373b72f6b82c7aa409928d9984d185379dc93\" returns successfully" Jan 29 11:54:47.090077 systemd[1]: Started cri-containerd-5335b609fc2ec25ce5b72386310e286b2b2f0e44595e7c5d764ad2541b0539d8.scope - libcontainer container 5335b609fc2ec25ce5b72386310e286b2b2f0e44595e7c5d764ad2541b0539d8. 
Jan 29 11:54:47.139192 containerd[1460]: time="2025-01-29T11:54:47.139137343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-8b8xw,Uid:9c2e1b9f-fa38-413a-a5f0-ccd22045501f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5335b609fc2ec25ce5b72386310e286b2b2f0e44595e7c5d764ad2541b0539d8\"" Jan 29 11:54:47.141509 containerd[1460]: time="2025-01-29T11:54:47.141459854Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:54:47.411082 kubelet[2494]: E0129 11:54:47.411032 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:47.953906 kubelet[2494]: E0129 11:54:47.953856 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:47.954054 kubelet[2494]: E0129 11:54:47.954023 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:47.954276 kubelet[2494]: E0129 11:54:47.954230 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:47.974252 kubelet[2494]: I0129 11:54:47.974156 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjsb6" podStartSLOduration=1.974129867 podStartE2EDuration="1.974129867s" podCreationTimestamp="2025-01-29 11:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:47.973843975 +0000 UTC m=+6.139521152" watchObservedRunningTime="2025-01-29 11:54:47.974129867 +0000 UTC m=+6.139807034" Jan 29 11:54:49.192019 update_engine[1440]: I20250129 11:54:49.191876 1440 update_attempter.cc:509] Updating boot flags... Jan 29 11:54:49.218839 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2833) Jan 29 11:54:49.259836 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2836) Jan 29 11:54:51.819431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867726570.mount: Deactivated successfully. 
Jan 29 11:54:53.108493 containerd[1460]: time="2025-01-29T11:54:53.108391926Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:53.110029 containerd[1460]: time="2025-01-29T11:54:53.109956460Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:54:53.111217 containerd[1460]: time="2025-01-29T11:54:53.111174159Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:53.114143 containerd[1460]: time="2025-01-29T11:54:53.114078505Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:53.115009 containerd[1460]: time="2025-01-29T11:54:53.114970620Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.973462084s" Jan 29 11:54:53.115065 containerd[1460]: time="2025-01-29T11:54:53.115019662Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:54:53.118094 containerd[1460]: time="2025-01-29T11:54:53.118060805Z" level=info msg="CreateContainer within sandbox \"5335b609fc2ec25ce5b72386310e286b2b2f0e44595e7c5d764ad2541b0539d8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:54:53.136637 containerd[1460]: time="2025-01-29T11:54:53.136494406Z" level=info msg="CreateContainer within sandbox \"5335b609fc2ec25ce5b72386310e286b2b2f0e44595e7c5d764ad2541b0539d8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"87e1424291e6d0f259b39212ad072c9331dfbeb176e425ffd1497a8ff6c7a6de\"" Jan 29 11:54:53.137500 containerd[1460]: time="2025-01-29T11:54:53.137399966Z" level=info msg="StartContainer for \"87e1424291e6d0f259b39212ad072c9331dfbeb176e425ffd1497a8ff6c7a6de\"" Jan 29 11:54:53.178008 systemd[1]: Started cri-containerd-87e1424291e6d0f259b39212ad072c9331dfbeb176e425ffd1497a8ff6c7a6de.scope - libcontainer container 87e1424291e6d0f259b39212ad072c9331dfbeb176e425ffd1497a8ff6c7a6de. 
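The reported pull duration can be sanity-checked directly from the surrounding timestamps: the PullImage entry for quay.io/tigera/operator:v1.36.2 appears at 11:54:47.141 and the Pulled entry at 11:54:53.115, which lines up with the logged "in 5.973462084s". For example:

```go
// Quick check of the containerd pull timing above. Timestamps are copied
// from the log; RFC3339Nano is the format containerd emits.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-29T11:54:47.141459854Z") // PullImage logged
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-29T11:54:53.115019662Z")  // Pulled logged
	fmt.Println(done.Sub(start)) // ~5.97356s, agreeing with the reported 5.973462084s
}
```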
Jan 29 11:54:53.210943 containerd[1460]: time="2025-01-29T11:54:53.210878367Z" level=info msg="StartContainer for \"87e1424291e6d0f259b39212ad072c9331dfbeb176e425ffd1497a8ff6c7a6de\" returns successfully" Jan 29 11:54:54.123440 kubelet[2494]: E0129 11:54:54.123387 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:54.145987 kubelet[2494]: I0129 11:54:54.145458 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-8b8xw" podStartSLOduration=2.169789766 podStartE2EDuration="8.145426081s" podCreationTimestamp="2025-01-29 11:54:46 +0000 UTC" firstStartedPulling="2025-01-29 11:54:47.140580938 +0000 UTC m=+5.306258105" lastFinishedPulling="2025-01-29 11:54:53.116217253 +0000 UTC m=+11.281894420" observedRunningTime="2025-01-29 11:54:54.042323842 +0000 UTC m=+12.208001009" watchObservedRunningTime="2025-01-29 11:54:54.145426081 +0000 UTC m=+12.311103248" Jan 29 11:54:56.221442 systemd[1]: Created slice kubepods-besteffort-pod666715fe_8a5c_41bc_946d_1fd9b726994c.slice - libcontainer container kubepods-besteffort-pod666715fe_8a5c_41bc_946d_1fd9b726994c.slice. Jan 29 11:54:56.270544 systemd[1]: Created slice kubepods-besteffort-pod49f050a4_1b8e_4ac4_af63_e9bfe8c49fdc.slice - libcontainer container kubepods-besteffort-pod49f050a4_1b8e_4ac4_af63_e9bfe8c49fdc.slice. Jan 29 11:54:56.320451 kubelet[2494]: I0129 11:54:56.320079 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666715fe-8a5c-41bc-946d-1fd9b726994c-tigera-ca-bundle\") pod \"calico-typha-87d8648cf-4fp4n\" (UID: \"666715fe-8a5c-41bc-946d-1fd9b726994c\") " pod="calico-system/calico-typha-87d8648cf-4fp4n" Jan 29 11:54:56.320451 kubelet[2494]: I0129 11:54:56.320146 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/666715fe-8a5c-41bc-946d-1fd9b726994c-typha-certs\") pod \"calico-typha-87d8648cf-4fp4n\" (UID: \"666715fe-8a5c-41bc-946d-1fd9b726994c\") " pod="calico-system/calico-typha-87d8648cf-4fp4n" Jan 29 11:54:56.320451 kubelet[2494]: I0129 11:54:56.320173 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d788k\" (UniqueName: \"kubernetes.io/projected/666715fe-8a5c-41bc-946d-1fd9b726994c-kube-api-access-d788k\") pod \"calico-typha-87d8648cf-4fp4n\" (UID: \"666715fe-8a5c-41bc-946d-1fd9b726994c\") " pod="calico-system/calico-typha-87d8648cf-4fp4n" Jan 29 11:54:56.374752 kubelet[2494]: E0129 11:54:56.374675 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee" Jan 29 11:54:56.421437 kubelet[2494]: I0129 11:54:56.421379 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3601942d-e4d5-4f58-9091-3f7871be8fee-varrun\") pod \"csi-node-driver-rqqp5\" (UID: \"3601942d-e4d5-4f58-9091-3f7871be8fee\") " pod="calico-system/csi-node-driver-rqqp5" Jan 29 11:54:56.421695 kubelet[2494]: I0129 11:54:56.421620 2494 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-node-certs\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.421695 kubelet[2494]: I0129 11:54:56.421665 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-cni-log-dir\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422422 kubelet[2494]: I0129 11:54:56.421705 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-policysync\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422422 kubelet[2494]: I0129 11:54:56.421725 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-tigera-ca-bundle\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422422 kubelet[2494]: I0129 11:54:56.421756 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-var-run-calico\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422422 kubelet[2494]: I0129 11:54:56.421777 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n57w\" (UniqueName: \"kubernetes.io/projected/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-kube-api-access-9n57w\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422422 kubelet[2494]: I0129 11:54:56.421815 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7b6\" (UniqueName: \"kubernetes.io/projected/3601942d-e4d5-4f58-9091-3f7871be8fee-kube-api-access-jg7b6\") pod \"csi-node-driver-rqqp5\" (UID: \"3601942d-e4d5-4f58-9091-3f7871be8fee\") " pod="calico-system/csi-node-driver-rqqp5" Jan 29 11:54:56.422594 kubelet[2494]: I0129 11:54:56.421864 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-cni-net-dir\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422594 kubelet[2494]: I0129 11:54:56.421884 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3601942d-e4d5-4f58-9091-3f7871be8fee-kubelet-dir\") pod \"csi-node-driver-rqqp5\" (UID: \"3601942d-e4d5-4f58-9091-3f7871be8fee\") " pod="calico-system/csi-node-driver-rqqp5" Jan 29 11:54:56.422594 kubelet[2494]: I0129 11:54:56.421906 2494 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3601942d-e4d5-4f58-9091-3f7871be8fee-registration-dir\") pod \"csi-node-driver-rqqp5\" (UID: \"3601942d-e4d5-4f58-9091-3f7871be8fee\") " pod="calico-system/csi-node-driver-rqqp5" Jan 29 11:54:56.422594 kubelet[2494]: I0129 11:54:56.421930 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-flexvol-driver-host\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422594 kubelet[2494]: I0129 11:54:56.421964 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-xtables-lock\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422755 kubelet[2494]: I0129 11:54:56.421986 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-var-lib-calico\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422755 kubelet[2494]: I0129 11:54:56.422009 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-cni-bin-dir\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.422755 kubelet[2494]: I0129 11:54:56.422032 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3601942d-e4d5-4f58-9091-3f7871be8fee-socket-dir\") pod \"csi-node-driver-rqqp5\" (UID: \"3601942d-e4d5-4f58-9091-3f7871be8fee\") " pod="calico-system/csi-node-driver-rqqp5" Jan 29 11:54:56.422755 kubelet[2494]: I0129 11:54:56.422056 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc-lib-modules\") pod \"calico-node-br8rn\" (UID: \"49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc\") " pod="calico-system/calico-node-br8rn" Jan 29 11:54:56.528908 kubelet[2494]: E0129 11:54:56.528599 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:56.530169 containerd[1460]: time="2025-01-29T11:54:56.530116127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87d8648cf-4fp4n,Uid:666715fe-8a5c-41bc-946d-1fd9b726994c,Namespace:calico-system,Attempt:0,}" Jan 29 11:54:56.534710 kubelet[2494]: E0129 11:54:56.534597 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:54:56.535447 kubelet[2494]: W0129 11:54:56.534904 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 29 11:54:56.535447 kubelet[2494]: E0129 11:54:56.534973 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:54:56.538897 kubelet[2494]: E0129 11:54:56.538869 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:54:56.538897 kubelet[2494]: W0129 11:54:56.538891 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:54:56.539002 kubelet[2494]: E0129 11:54:56.538915 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:54:56.542858 kubelet[2494]: E0129 11:54:56.540834 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:54:56.542858 kubelet[2494]: W0129 11:54:56.540855 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:54:56.542858 kubelet[2494]: E0129 11:54:56.540871 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:54:56.572408 containerd[1460]: time="2025-01-29T11:54:56.572070106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:56.572408 containerd[1460]: time="2025-01-29T11:54:56.572143545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:56.572408 containerd[1460]: time="2025-01-29T11:54:56.572155267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:56.572408 containerd[1460]: time="2025-01-29T11:54:56.572279251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:56.574863 kubelet[2494]: E0129 11:54:56.574828 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:56.575499 containerd[1460]: time="2025-01-29T11:54:56.575459619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-br8rn,Uid:49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc,Namespace:calico-system,Attempt:0,}" Jan 29 11:54:56.601107 systemd[1]: Started cri-containerd-594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96.scope - libcontainer container 594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96. Jan 29 11:54:56.614747 containerd[1460]: time="2025-01-29T11:54:56.614574895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:56.614747 containerd[1460]: time="2025-01-29T11:54:56.614667960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:56.614747 containerd[1460]: time="2025-01-29T11:54:56.614694520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:56.615002 containerd[1460]: time="2025-01-29T11:54:56.614882314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:56.648385 systemd[1]: Started cri-containerd-ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf.scope - libcontainer container ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf. Jan 29 11:54:56.684304 containerd[1460]: time="2025-01-29T11:54:56.684224530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87d8648cf-4fp4n,Uid:666715fe-8a5c-41bc-946d-1fd9b726994c,Namespace:calico-system,Attempt:0,} returns sandbox id \"594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96\"" Jan 29 11:54:56.693958 kubelet[2494]: E0129 11:54:56.693760 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:56.699036 containerd[1460]: time="2025-01-29T11:54:56.698879932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:54:56.714373 containerd[1460]: time="2025-01-29T11:54:56.714286730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-br8rn,Uid:49f050a4-1b8e-4ac4-af63-e9bfe8c49fdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\"" Jan 29 11:54:56.716152 kubelet[2494]: E0129 11:54:56.715503 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:57.925847 kubelet[2494]: E0129 11:54:57.925730 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee" Jan 29 11:54:58.959062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142256183.mount: Deactivated successfully. 
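The repeated driver-call.go/plugins.go errors in this section are kubelet's FlexVolume prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument; the binary is absent, stdout is empty, and unmarshalling "" fails with "unexpected end of JSON input". A minimal sketch of the init handshake a driver at that path would answer with, following the documented FlexVolume JSON convention (the details here are illustrative):

```go
// Minimal sketch of a FlexVolume driver's "init" handshake. kubelet invokes
// the driver binary with "init" and json-unmarshals its stdout, which is why
// a missing executable yields the "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // no attach/detach support
		})
		fmt.Println(string(out))
		return
	}
	// Anything unimplemented must still answer with valid JSON.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

The nodeagent~uds driver is presumably installed by the Calico pod2daemon-flexvol image being pulled just below, so the probe errors persist until that completes.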
Jan 29 11:54:59.341001 containerd[1460]: time="2025-01-29T11:54:59.340936359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:59.341740 containerd[1460]: time="2025-01-29T11:54:59.341677986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 11:54:59.350509 containerd[1460]: time="2025-01-29T11:54:59.350440296Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:59.414385 containerd[1460]: time="2025-01-29T11:54:59.414285348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:59.415140 containerd[1460]: time="2025-01-29T11:54:59.415091788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.716155199s" Jan 29 11:54:59.415239 containerd[1460]: time="2025-01-29T11:54:59.415142613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:54:59.426332 containerd[1460]: time="2025-01-29T11:54:59.426268426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:54:59.480116 containerd[1460]: time="2025-01-29T11:54:59.480062902Z" level=info msg="CreateContainer within sandbox \"594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:54:59.514463 containerd[1460]: time="2025-01-29T11:54:59.514387805Z" level=info msg="CreateContainer within sandbox \"594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\"" Jan 29 11:54:59.516810 containerd[1460]: time="2025-01-29T11:54:59.516717845Z" level=info msg="StartContainer for \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\"" Jan 29 11:54:59.554036 systemd[1]: Started cri-containerd-f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6.scope - libcontainer container f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6. 
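The pod_startup_latency_tracker entries in this log relate their two durations in a checkable way: for pods that pulled an image, podStartSLOduration is podStartE2EDuration minus the pull window, at least on these figures. Reproducing the tigera-operator numbers from above:

```go
// Reproduce the tigera-operator startup-latency arithmetic from the log:
// podStartE2EDuration minus the firstStartedPulling..lastFinishedPulling
// window should give podStartSLOduration.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-01-29 11:54:47.140580938 +0000 UTC")
	last, _ := time.Parse(layout, "2025-01-29 11:54:53.116217253 +0000 UTC")

	e2e := 8.145426081 * float64(time.Second) // podStartE2EDuration from the log
	pull := float64(last.Sub(first))          // 5.975636315s spent pulling the image

	fmt.Println(time.Duration(e2e - pull)) // ≈2.169789766s = podStartSLOduration
}
```

8.145426081s minus the 5.975636315s between firstStartedPulling and lastFinishedPulling gives exactly the logged 2.169789766s.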
Jan 29 11:54:59.606100 containerd[1460]: time="2025-01-29T11:54:59.605955541Z" level=info msg="StartContainer for \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\" returns successfully" Jan 29 11:54:59.938412 kubelet[2494]: E0129 11:54:59.938254 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee" Jan 29 11:55:00.047385 kubelet[2494]: E0129 11:55:00.047348 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:00.062547 kubelet[2494]: I0129 11:55:00.062480 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-87d8648cf-4fp4n" podStartSLOduration=1.332972932 podStartE2EDuration="4.06245219s" podCreationTimestamp="2025-01-29 11:54:56 +0000 UTC" firstStartedPulling="2025-01-29 11:54:56.696459627 +0000 UTC m=+14.862136794" lastFinishedPulling="2025-01-29 11:54:59.425938885 +0000 UTC m=+17.591616052" observedRunningTime="2025-01-29 11:55:00.059919079 +0000 UTC m=+18.225596246" watchObservedRunningTime="2025-01-29 11:55:00.06245219 +0000 UTC m=+18.228129357" Jan 29 11:55:00.146948 kubelet[2494]: E0129 11:55:00.146891 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:00.146948 kubelet[2494]: W0129 11:55:00.146933 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:00.150336 kubelet[2494]: E0129 11:55:00.150294 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:00.150616 kubelet[2494]: E0129 11:55:00.150595 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:00.150616 kubelet[2494]: W0129 11:55:00.150612 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:00.150694 kubelet[2494]: E0129 11:55:00.150627 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:00.150895 kubelet[2494]: E0129 11:55:00.150873 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:00.150895 kubelet[2494]: W0129 11:55:00.150885 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:00.150895 kubelet[2494]: E0129 11:55:00.150894 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.049411 kubelet[2494]: E0129 11:55:01.049356 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:01.058585 kubelet[2494]: E0129 11:55:01.058551 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.058585 kubelet[2494]: W0129 11:55:01.058573 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.058770 kubelet[2494]: E0129 11:55:01.058597 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.058873 kubelet[2494]: E0129 11:55:01.058852 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.058873 kubelet[2494]: W0129 11:55:01.058868 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.058958 kubelet[2494]: E0129 11:55:01.058880 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.059107 kubelet[2494]: E0129 11:55:01.059089 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.059107 kubelet[2494]: W0129 11:55:01.059102 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.059210 kubelet[2494]: E0129 11:55:01.059114 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.059412 kubelet[2494]: E0129 11:55:01.059394 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.059412 kubelet[2494]: W0129 11:55:01.059408 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.059495 kubelet[2494]: E0129 11:55:01.059420 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.059665 kubelet[2494]: E0129 11:55:01.059642 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.059665 kubelet[2494]: W0129 11:55:01.059657 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.059742 kubelet[2494]: E0129 11:55:01.059669 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.059931 kubelet[2494]: E0129 11:55:01.059911 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.059931 kubelet[2494]: W0129 11:55:01.059926 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.059994 kubelet[2494]: E0129 11:55:01.059938 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.060199 kubelet[2494]: E0129 11:55:01.060173 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.060199 kubelet[2494]: W0129 11:55:01.060186 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.060256 kubelet[2494]: E0129 11:55:01.060207 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.060446 kubelet[2494]: E0129 11:55:01.060425 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.060446 kubelet[2494]: W0129 11:55:01.060440 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.060486 kubelet[2494]: E0129 11:55:01.060450 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.060695 kubelet[2494]: E0129 11:55:01.060665 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.060695 kubelet[2494]: W0129 11:55:01.060678 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.060695 kubelet[2494]: E0129 11:55:01.060689 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.060847 containerd[1460]: time="2025-01-29T11:55:01.060762663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:01.061206 kubelet[2494]: E0129 11:55:01.060980 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.061206 kubelet[2494]: W0129 11:55:01.060989 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.061206 kubelet[2494]: E0129 11:55:01.060998 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.061206 kubelet[2494]: E0129 11:55:01.061183 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.061206 kubelet[2494]: W0129 11:55:01.061204 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.061388 kubelet[2494]: E0129 11:55:01.061214 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.061433 kubelet[2494]: E0129 11:55:01.061421 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.061433 kubelet[2494]: W0129 11:55:01.061430 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.061492 kubelet[2494]: E0129 11:55:01.061439 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.061659 kubelet[2494]: E0129 11:55:01.061643 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.061659 kubelet[2494]: W0129 11:55:01.061651 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.061659 kubelet[2494]: E0129 11:55:01.061659 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.061774 containerd[1460]: time="2025-01-29T11:55:01.061741326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 11:55:01.061891 kubelet[2494]: E0129 11:55:01.061861 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.061891 kubelet[2494]: W0129 11:55:01.061873 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.061891 kubelet[2494]: E0129 11:55:01.061884 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.062121 kubelet[2494]: E0129 11:55:01.062105 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.062121 kubelet[2494]: W0129 11:55:01.062115 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.062185 kubelet[2494]: E0129 11:55:01.062125 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.063000 containerd[1460]: time="2025-01-29T11:55:01.062968496Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:01.065589 containerd[1460]: time="2025-01-29T11:55:01.065550990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:01.066155 containerd[1460]: time="2025-01-29T11:55:01.066126222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.63980136s" Jan 29 11:55:01.066216 containerd[1460]: time="2025-01-29T11:55:01.066154796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:55:01.067978 containerd[1460]: time="2025-01-29T11:55:01.067953002Z" level=info msg="CreateContainer within sandbox \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:55:01.094553 containerd[1460]: time="2025-01-29T11:55:01.094485594Z" level=info msg="CreateContainer within sandbox \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5\"" Jan 29 11:55:01.095172 containerd[1460]: 
time="2025-01-29T11:55:01.095138123Z" level=info msg="StartContainer for \"d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5\"" Jan 29 11:55:01.135024 systemd[1]: Started cri-containerd-d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5.scope - libcontainer container d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5. Jan 29 11:55:01.154242 kubelet[2494]: E0129 11:55:01.153937 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.154242 kubelet[2494]: W0129 11:55:01.153970 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.154242 kubelet[2494]: E0129 11:55:01.154005 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.154585 kubelet[2494]: E0129 11:55:01.154567 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.154663 kubelet[2494]: W0129 11:55:01.154647 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.154753 kubelet[2494]: E0129 11:55:01.154736 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.155137 kubelet[2494]: E0129 11:55:01.155116 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.155567 kubelet[2494]: W0129 11:55:01.155275 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.155567 kubelet[2494]: E0129 11:55:01.155298 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.155761 kubelet[2494]: E0129 11:55:01.155742 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.155867 kubelet[2494]: W0129 11:55:01.155848 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.155971 kubelet[2494]: E0129 11:55:01.155950 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.156668 kubelet[2494]: E0129 11:55:01.156501 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.156668 kubelet[2494]: W0129 11:55:01.156518 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.156668 kubelet[2494]: E0129 11:55:01.156537 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.156919 kubelet[2494]: E0129 11:55:01.156834 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.156919 kubelet[2494]: W0129 11:55:01.156845 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.156919 kubelet[2494]: E0129 11:55:01.156868 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.157081 kubelet[2494]: E0129 11:55:01.157051 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.157081 kubelet[2494]: W0129 11:55:01.157064 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.157275 kubelet[2494]: E0129 11:55:01.157118 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.157446 kubelet[2494]: E0129 11:55:01.157386 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.157446 kubelet[2494]: W0129 11:55:01.157407 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.157446 kubelet[2494]: E0129 11:55:01.157439 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.157827 kubelet[2494]: E0129 11:55:01.157774 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.157827 kubelet[2494]: W0129 11:55:01.157809 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.157913 kubelet[2494]: E0129 11:55:01.157842 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.158201 kubelet[2494]: E0129 11:55:01.158161 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.158201 kubelet[2494]: W0129 11:55:01.158177 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.158281 kubelet[2494]: E0129 11:55:01.158204 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.158499 kubelet[2494]: E0129 11:55:01.158461 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.158499 kubelet[2494]: W0129 11:55:01.158477 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.158585 kubelet[2494]: E0129 11:55:01.158508 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.158984 kubelet[2494]: E0129 11:55:01.158960 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.158984 kubelet[2494]: W0129 11:55:01.158980 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.159141 kubelet[2494]: E0129 11:55:01.159094 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.159318 kubelet[2494]: E0129 11:55:01.159293 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.159318 kubelet[2494]: W0129 11:55:01.159310 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.159396 kubelet[2494]: E0129 11:55:01.159331 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.159679 kubelet[2494]: E0129 11:55:01.159661 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.159679 kubelet[2494]: W0129 11:55:01.159676 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.159775 kubelet[2494]: E0129 11:55:01.159689 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:55:01.160188 kubelet[2494]: E0129 11:55:01.160078 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.160188 kubelet[2494]: W0129 11:55:01.160094 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.160188 kubelet[2494]: E0129 11:55:01.160107 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.160344 kubelet[2494]: E0129 11:55:01.160327 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.160344 kubelet[2494]: W0129 11:55:01.160340 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.160407 kubelet[2494]: E0129 11:55:01.160350 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.160637 kubelet[2494]: E0129 11:55:01.160620 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.160637 kubelet[2494]: W0129 11:55:01.160632 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.160637 kubelet[2494]: E0129 11:55:01.160640 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.161262 kubelet[2494]: E0129 11:55:01.161245 2494 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:55:01.161262 kubelet[2494]: W0129 11:55:01.161258 2494 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:55:01.161358 kubelet[2494]: E0129 11:55:01.161269 2494 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:55:01.172046 containerd[1460]: time="2025-01-29T11:55:01.171989636Z" level=info msg="StartContainer for \"d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5\" returns successfully" Jan 29 11:55:01.198398 systemd[1]: cri-containerd-d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5.scope: Deactivated successfully. Jan 29 11:55:01.224610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5-rootfs.mount: Deactivated successfully. 
Jan 29 11:55:01.553416 containerd[1460]: time="2025-01-29T11:55:01.553329432Z" level=info msg="shim disconnected" id=d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5 namespace=k8s.io
Jan 29 11:55:01.553416 containerd[1460]: time="2025-01-29T11:55:01.553400565Z" level=warning msg="cleaning up after shim disconnected" id=d7971345c505eabcf8b4b575ab899b6c23e9efe05e8554f5d7abe8b4d5fb8bc5 namespace=k8s.io
Jan 29 11:55:01.553416 containerd[1460]: time="2025-01-29T11:55:01.553412467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:55:01.926208 kubelet[2494]: E0129 11:55:01.926050 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee"
Jan 29 11:55:02.052400 kubelet[2494]: E0129 11:55:02.052355 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:02.052872 kubelet[2494]: E0129 11:55:02.052528 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:02.054096 containerd[1460]: time="2025-01-29T11:55:02.053682826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 11:55:03.925725 kubelet[2494]: E0129 11:55:03.925641 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee"
Jan 29 11:55:05.925996 kubelet[2494]: E0129 11:55:05.925917 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee"
Jan 29 11:55:07.290185 containerd[1460]: time="2025-01-29T11:55:07.290140874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:55:07.290990 containerd[1460]: time="2025-01-29T11:55:07.290955576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 11:55:07.292242 containerd[1460]: time="2025-01-29T11:55:07.292214624Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:55:07.295628 containerd[1460]: time="2025-01-29T11:55:07.295567109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:55:07.296299 containerd[1460]: time="2025-01-29T11:55:07.296264429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.242540166s"
Jan 29 11:55:07.296299 containerd[1460]: time="2025-01-29T11:55:07.296295899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 11:55:07.298365 containerd[1460]: time="2025-01-29T11:55:07.298308023Z" level=info msg="CreateContainer within sandbox \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:55:07.318057 containerd[1460]: time="2025-01-29T11:55:07.317993640Z" level=info msg="CreateContainer within sandbox \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190\""
Jan 29 11:55:07.318645 containerd[1460]: time="2025-01-29T11:55:07.318612654Z" level=info msg="StartContainer for \"8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190\""
Jan 29 11:55:07.357943 systemd[1]: Started cri-containerd-8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190.scope - libcontainer container 8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190.
Jan 29 11:55:07.392781 containerd[1460]: time="2025-01-29T11:55:07.392713377Z" level=info msg="StartContainer for \"8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190\" returns successfully"
Jan 29 11:55:07.925602 kubelet[2494]: E0129 11:55:07.925533 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee"
Jan 29 11:55:08.064976 kubelet[2494]: E0129 11:55:08.064934 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:09.066725 kubelet[2494]: E0129 11:55:09.066679 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:09.638464 systemd[1]: cri-containerd-8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190.scope: Deactivated successfully.
Jan 29 11:55:09.645397 kubelet[2494]: I0129 11:55:09.645367 2494 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 11:55:09.662873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190-rootfs.mount: Deactivated successfully.
Jan 29 11:55:09.827546 systemd[1]: Created slice kubepods-burstable-pod8cdf9998_9f22_47fa_be10_e527ac360095.slice - libcontainer container kubepods-burstable-pod8cdf9998_9f22_47fa_be10_e527ac360095.slice.
Jan 29 11:55:09.842924 systemd[1]: Created slice kubepods-besteffort-podbd0ce080_79db_4e06_87fe_bc35e2d0e23b.slice - libcontainer container kubepods-besteffort-podbd0ce080_79db_4e06_87fe_bc35e2d0e23b.slice.
Jan 29 11:55:09.848497 systemd[1]: Created slice kubepods-burstable-pod79fca267_9a26_4684_b71e_b7f100ade442.slice - libcontainer container kubepods-burstable-pod79fca267_9a26_4684_b71e_b7f100ade442.slice.
Jan 29 11:55:09.853696 systemd[1]: Created slice kubepods-besteffort-pod83ff0ee0_50a2_4a27_851e_d262c1a81765.slice - libcontainer container kubepods-besteffort-pod83ff0ee0_50a2_4a27_851e_d262c1a81765.slice.
Jan 29 11:55:09.857418 systemd[1]: Created slice kubepods-besteffort-pod130669b1_1d96_4e3f_83e0_176296743cad.slice - libcontainer container kubepods-besteffort-pod130669b1_1d96_4e3f_83e0_176296743cad.slice.
Jan 29 11:55:09.913926 kubelet[2494]: I0129 11:55:09.913721 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh55p\" (UniqueName: \"kubernetes.io/projected/8cdf9998-9f22-47fa-be10-e527ac360095-kube-api-access-kh55p\") pod \"coredns-668d6bf9bc-q88sz\" (UID: \"8cdf9998-9f22-47fa-be10-e527ac360095\") " pod="kube-system/coredns-668d6bf9bc-q88sz"
Jan 29 11:55:09.913926 kubelet[2494]: I0129 11:55:09.913771 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cdf9998-9f22-47fa-be10-e527ac360095-config-volume\") pod \"coredns-668d6bf9bc-q88sz\" (UID: \"8cdf9998-9f22-47fa-be10-e527ac360095\") " pod="kube-system/coredns-668d6bf9bc-q88sz"
Jan 29 11:55:09.934051 systemd[1]: Created slice kubepods-besteffort-pod3601942d_e4d5_4f58_9091_3f7871be8fee.slice - libcontainer container kubepods-besteffort-pod3601942d_e4d5_4f58_9091_3f7871be8fee.slice.
Jan 29 11:55:09.937036 containerd[1460]: time="2025-01-29T11:55:09.936984219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqqp5,Uid:3601942d-e4d5-4f58-9091-3f7871be8fee,Namespace:calico-system,Attempt:0,}"
Jan 29 11:55:10.014616 kubelet[2494]: I0129 11:55:10.014527 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8sb\" (UniqueName: \"kubernetes.io/projected/bd0ce080-79db-4e06-87fe-bc35e2d0e23b-kube-api-access-tn8sb\") pod \"calico-apiserver-7857f547f9-l6br2\" (UID: \"bd0ce080-79db-4e06-87fe-bc35e2d0e23b\") " pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2"
Jan 29 11:55:10.014616 kubelet[2494]: I0129 11:55:10.014590 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd0ce080-79db-4e06-87fe-bc35e2d0e23b-calico-apiserver-certs\") pod \"calico-apiserver-7857f547f9-l6br2\" (UID: \"bd0ce080-79db-4e06-87fe-bc35e2d0e23b\") " pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2"
Jan 29 11:55:10.014863 kubelet[2494]: I0129 11:55:10.014645 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkn6q\" (UniqueName: \"kubernetes.io/projected/130669b1-1d96-4e3f-83e0-176296743cad-kube-api-access-zkn6q\") pod \"calico-apiserver-7857f547f9-cj9n8\" (UID: \"130669b1-1d96-4e3f-83e0-176296743cad\") " pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8"
Jan 29 11:55:10.014863 kubelet[2494]: I0129 11:55:10.014677 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83ff0ee0-50a2-4a27-851e-d262c1a81765-tigera-ca-bundle\") pod \"calico-kube-controllers-d77bcc79-l7ddq\" (UID: \"83ff0ee0-50a2-4a27-851e-d262c1a81765\") " pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq"
Jan 29 11:55:10.014863 kubelet[2494]: I0129 11:55:10.014703 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/130669b1-1d96-4e3f-83e0-176296743cad-calico-apiserver-certs\") pod \"calico-apiserver-7857f547f9-cj9n8\" (UID: \"130669b1-1d96-4e3f-83e0-176296743cad\") " pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8"
Jan 29 11:55:10.014863 kubelet[2494]: I0129 11:55:10.014745 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8l4m\" (UniqueName: \"kubernetes.io/projected/83ff0ee0-50a2-4a27-851e-d262c1a81765-kube-api-access-p8l4m\") pod \"calico-kube-controllers-d77bcc79-l7ddq\" (UID: \"83ff0ee0-50a2-4a27-851e-d262c1a81765\") " pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq"
Jan 29 11:55:10.014863 kubelet[2494]: I0129 11:55:10.014771 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79fca267-9a26-4684-b71e-b7f100ade442-config-volume\") pod \"coredns-668d6bf9bc-sjsdr\" (UID: \"79fca267-9a26-4684-b71e-b7f100ade442\") " pod="kube-system/coredns-668d6bf9bc-sjsdr"
Jan 29 11:55:10.014994 kubelet[2494]: I0129 11:55:10.014821 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4stp5\" (UniqueName: \"kubernetes.io/projected/79fca267-9a26-4684-b71e-b7f100ade442-kube-api-access-4stp5\") pod \"coredns-668d6bf9bc-sjsdr\" (UID: \"79fca267-9a26-4684-b71e-b7f100ade442\") " pod="kube-system/coredns-668d6bf9bc-sjsdr"
Jan 29 11:55:10.055962 containerd[1460]: time="2025-01-29T11:55:10.055884040Z" level=info msg="shim disconnected" id=8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190 namespace=k8s.io
Jan 29 11:55:10.055962 containerd[1460]: time="2025-01-29T11:55:10.055955023Z" level=warning msg="cleaning up after shim disconnected" id=8792478d492f15e4b5680dd66f3bb994170d793e1a6e06e6afe0aa99e57b1190 namespace=k8s.io
Jan 29 11:55:10.055962 containerd[1460]: time="2025-01-29T11:55:10.055967256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:55:10.131249 kubelet[2494]: E0129 11:55:10.131195 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:10.133348 containerd[1460]: time="2025-01-29T11:55:10.133173014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q88sz,Uid:8cdf9998-9f22-47fa-be10-e527ac360095,Namespace:kube-system,Attempt:0,}"
Jan 29 11:55:10.147603 containerd[1460]: time="2025-01-29T11:55:10.147548031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-l6br2,Uid:bd0ce080-79db-4e06-87fe-bc35e2d0e23b,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:55:10.150951 kubelet[2494]: E0129 11:55:10.150899 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:10.151782 containerd[1460]: time="2025-01-29T11:55:10.151724300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjsdr,Uid:79fca267-9a26-4684-b71e-b7f100ade442,Namespace:kube-system,Attempt:0,}"
Jan 29 11:55:10.156671 containerd[1460]: time="2025-01-29T11:55:10.156616876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d77bcc79-l7ddq,Uid:83ff0ee0-50a2-4a27-851e-d262c1a81765,Namespace:calico-system,Attempt:0,}"
Jan 29 11:55:10.158197 containerd[1460]: time="2025-01-29T11:55:10.158152522Z" level=error msg="Failed to destroy network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.158607 containerd[1460]: time="2025-01-29T11:55:10.158564577Z" level=error msg="encountered an error cleaning up failed sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.158670 containerd[1460]: time="2025-01-29T11:55:10.158622175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqqp5,Uid:3601942d-e4d5-4f58-9091-3f7871be8fee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.158911 kubelet[2494]: E0129 11:55:10.158862 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.158990 kubelet[2494]: E0129 11:55:10.158945 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqqp5"
Jan 29 11:55:10.158990 kubelet[2494]: E0129 11:55:10.158978 2494 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqqp5"
Jan 29 11:55:10.159098 kubelet[2494]: E0129 11:55:10.159033 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rqqp5_calico-system(3601942d-e4d5-4f58-9091-3f7871be8fee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rqqp5_calico-system(3601942d-e4d5-4f58-9091-3f7871be8fee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee"
Jan 29 11:55:10.160845 containerd[1460]: time="2025-01-29T11:55:10.160779149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-cj9n8,Uid:130669b1-1d96-4e3f-83e0-176296743cad,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:55:10.233297 containerd[1460]: time="2025-01-29T11:55:10.232314329Z" level=error msg="Failed to destroy network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.234132 containerd[1460]: time="2025-01-29T11:55:10.234014505Z" level=error msg="encountered an error cleaning up failed sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.234132 containerd[1460]: time="2025-01-29T11:55:10.234094916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q88sz,Uid:8cdf9998-9f22-47fa-be10-e527ac360095,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.234371 kubelet[2494]: E0129 11:55:10.234334 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.234462 kubelet[2494]: E0129 11:55:10.234400 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q88sz"
Jan 29 11:55:10.234462 kubelet[2494]: E0129 11:55:10.234423 2494 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q88sz"
Jan 29 11:55:10.234521 kubelet[2494]: E0129 11:55:10.234469 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q88sz_kube-system(8cdf9998-9f22-47fa-be10-e527ac360095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q88sz_kube-system(8cdf9998-9f22-47fa-be10-e527ac360095)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q88sz" podUID="8cdf9998-9f22-47fa-be10-e527ac360095"
Jan 29 11:55:10.281554 containerd[1460]: time="2025-01-29T11:55:10.281393253Z" level=error msg="Failed to destroy network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.282173 containerd[1460]: time="2025-01-29T11:55:10.282144073Z" level=error msg="encountered an error cleaning up failed sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.282332 containerd[1460]: time="2025-01-29T11:55:10.282300778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d77bcc79-l7ddq,Uid:83ff0ee0-50a2-4a27-851e-d262c1a81765,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.282761 kubelet[2494]: E0129 11:55:10.282710 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.282904 kubelet[2494]: E0129 11:55:10.282783 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq"
Jan 29 11:55:10.282904 kubelet[2494]: E0129 11:55:10.282824 2494 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq"
Jan 29 11:55:10.282904 kubelet[2494]: E0129 11:55:10.282871 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d77bcc79-l7ddq_calico-system(83ff0ee0-50a2-4a27-851e-d262c1a81765)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d77bcc79-l7ddq_calico-system(83ff0ee0-50a2-4a27-851e-d262c1a81765)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq" podUID="83ff0ee0-50a2-4a27-851e-d262c1a81765"
Jan 29 11:55:10.291359 containerd[1460]: time="2025-01-29T11:55:10.291278752Z" level=error msg="Failed to destroy network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.292062 containerd[1460]: time="2025-01-29T11:55:10.292012081Z" level=error msg="encountered an error cleaning up failed sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.292136 containerd[1460]: time="2025-01-29T11:55:10.292098333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-l6br2,Uid:bd0ce080-79db-4e06-87fe-bc35e2d0e23b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.292654 kubelet[2494]: E0129 11:55:10.292478 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.292654 kubelet[2494]: E0129 11:55:10.292592 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2"
Jan 29 11:55:10.292654 kubelet[2494]: E0129 11:55:10.292617 2494 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2"
Jan 29 11:55:10.293821 kubelet[2494]: E0129 11:55:10.292859 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7857f547f9-l6br2_calico-apiserver(bd0ce080-79db-4e06-87fe-bc35e2d0e23b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7857f547f9-l6br2_calico-apiserver(bd0ce080-79db-4e06-87fe-bc35e2d0e23b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2" podUID="bd0ce080-79db-4e06-87fe-bc35e2d0e23b"
Jan 29 11:55:10.302758 containerd[1460]: time="2025-01-29T11:55:10.302686834Z" level=error msg="Failed to destroy network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.303362 containerd[1460]: time="2025-01-29T11:55:10.303324823Z" level=error msg="encountered an error cleaning up failed sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.303434 containerd[1460]: time="2025-01-29T11:55:10.303399193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjsdr,Uid:79fca267-9a26-4684-b71e-b7f100ade442,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.303713 kubelet[2494]: E0129 11:55:10.303656 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.303713 kubelet[2494]: E0129 11:55:10.303718 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sjsdr"
Jan 29 11:55:10.304019 kubelet[2494]: E0129 11:55:10.303738 2494 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sjsdr"
Jan 29 11:55:10.304019 kubelet[2494]: E0129 11:55:10.303786 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sjsdr_kube-system(79fca267-9a26-4684-b71e-b7f100ade442)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sjsdr_kube-system(79fca267-9a26-4684-b71e-b7f100ade442)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sjsdr" podUID="79fca267-9a26-4684-b71e-b7f100ade442"
Jan 29 11:55:10.309260 containerd[1460]: time="2025-01-29T11:55:10.309198643Z" level=error msg="Failed to destroy network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.309718 containerd[1460]: time="2025-01-29T11:55:10.309669818Z" level=error msg="encountered an error cleaning up failed sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.309769 containerd[1460]: time="2025-01-29T11:55:10.309752133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-cj9n8,Uid:130669b1-1d96-4e3f-83e0-176296743cad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.310090 kubelet[2494]: E0129 11:55:10.310028 2494 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:55:10.310154 kubelet[2494]: E0129 11:55:10.310104 2494 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8"
Jan 29 11:55:10.310154 kubelet[2494]: E0129 11:55:10.310132 2494 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8" Jan 29 11:55:10.310237 kubelet[2494]: E0129 11:55:10.310199 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7857f547f9-cj9n8_calico-apiserver(130669b1-1d96-4e3f-83e0-176296743cad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7857f547f9-cj9n8_calico-apiserver(130669b1-1d96-4e3f-83e0-176296743cad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8" podUID="130669b1-1d96-4e3f-83e0-176296743cad" Jan 29 11:55:10.669233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4-shm.mount: Deactivated successfully. Jan 29 11:55:11.071285 kubelet[2494]: I0129 11:55:11.071243 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:11.072051 containerd[1460]: time="2025-01-29T11:55:11.072011646Z" level=info msg="StopPodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\"" Jan 29 11:55:11.072685 containerd[1460]: time="2025-01-29T11:55:11.072645367Z" level=info msg="Ensure that sandbox ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4 in task-service has been cleanup successfully" Jan 29 11:55:11.074724 kubelet[2494]: E0129 11:55:11.073554 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:11.074869 containerd[1460]: time="2025-01-29T11:55:11.074270270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:55:11.077846 kubelet[2494]: I0129 11:55:11.077555 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:11.078545 containerd[1460]: time="2025-01-29T11:55:11.078489018Z" level=info msg="StopPodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\"" Jan 29 11:55:11.078932 containerd[1460]: time="2025-01-29T11:55:11.078869564Z" level=info msg="Ensure that sandbox 43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112 in task-service has been cleanup successfully" Jan 29 11:55:11.079166 kubelet[2494]: I0129 11:55:11.079043 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:11.079507 containerd[1460]: time="2025-01-29T11:55:11.079477916Z" level=info msg="StopPodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\"" Jan 29 11:55:11.079617 containerd[1460]: time="2025-01-29T11:55:11.079602040Z" level=info msg="Ensure that sandbox 78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4 in task-service has been cleanup successfully" Jan 29 11:55:11.085239 kubelet[2494]: I0129 11:55:11.085192 2494 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:11.085808 containerd[1460]: time="2025-01-29T11:55:11.085742680Z" level=info msg="StopPodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\"" Jan 29 11:55:11.086092 containerd[1460]: time="2025-01-29T11:55:11.086067470Z" level=info msg="Ensure that sandbox 12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8 in task-service has been cleanup successfully" Jan 29 11:55:11.088649 kubelet[2494]: I0129 11:55:11.087642 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:11.088780 containerd[1460]: time="2025-01-29T11:55:11.088185400Z" level=info msg="StopPodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\"" Jan 29 11:55:11.088780 containerd[1460]: time="2025-01-29T11:55:11.088374395Z" level=info msg="Ensure that sandbox 2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045 in task-service has been cleanup successfully" Jan 29 11:55:11.104950 kubelet[2494]: I0129 11:55:11.104859 2494 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:11.106562 containerd[1460]: time="2025-01-29T11:55:11.106116817Z" level=info msg="StopPodSandbox for \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\"" Jan 29 11:55:11.106562 containerd[1460]: time="2025-01-29T11:55:11.106320940Z" level=info msg="Ensure that sandbox 1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9 in task-service has been cleanup successfully" Jan 29 11:55:11.145578 containerd[1460]: time="2025-01-29T11:55:11.145513482Z" level=error msg="StopPodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" failed" error="failed to destroy network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:55:11.145844 kubelet[2494]: E0129 11:55:11.145798 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:11.146297 kubelet[2494]: E0129 11:55:11.145874 2494 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4"} Jan 29 11:55:11.146297 kubelet[2494]: E0129 11:55:11.145956 2494 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3601942d-e4d5-4f58-9091-3f7871be8fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:55:11.146297 kubelet[2494]: E0129 11:55:11.145982 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3601942d-e4d5-4f58-9091-3f7871be8fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqqp5" podUID="3601942d-e4d5-4f58-9091-3f7871be8fee" Jan 29 11:55:11.156081 containerd[1460]: time="2025-01-29T11:55:11.156002903Z" level=error msg="StopPodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" failed" error="failed to destroy network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:55:11.156583 kubelet[2494]: E0129 11:55:11.156519 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:11.156674 kubelet[2494]: E0129 11:55:11.156587 2494 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112"} Jan 29 11:55:11.156674 kubelet[2494]: E0129 11:55:11.156627 2494 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"130669b1-1d96-4e3f-83e0-176296743cad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:55:11.156674 kubelet[2494]: E0129 11:55:11.156652 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"130669b1-1d96-4e3f-83e0-176296743cad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8" podUID="130669b1-1d96-4e3f-83e0-176296743cad" Jan 29 11:55:11.156899 kubelet[2494]: E0129 11:55:11.156867 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:11.156946 containerd[1460]: time="2025-01-29T11:55:11.156726994Z" level=error msg="StopPodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" failed" error="failed to destroy network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:55:11.156989 kubelet[2494]: E0129 11:55:11.156892 2494 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045"} Jan 29 11:55:11.156989 kubelet[2494]: E0129 11:55:11.156950 2494 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd0ce080-79db-4e06-87fe-bc35e2d0e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:55:11.156989 kubelet[2494]: E0129 11:55:11.156966 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd0ce080-79db-4e06-87fe-bc35e2d0e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2" podUID="bd0ce080-79db-4e06-87fe-bc35e2d0e23b" Jan 29 11:55:11.157250 containerd[1460]: time="2025-01-29T11:55:11.157222936Z" level=error msg="StopPodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" failed" error="failed to destroy network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:55:11.157359 kubelet[2494]: E0129 11:55:11.157329 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:11.157414 kubelet[2494]: E0129 11:55:11.157360 2494 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8"} Jan 29 11:55:11.157414 kubelet[2494]: E0129 11:55:11.157381 2494 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79fca267-9a26-4684-b71e-b7f100ade442\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:55:11.157414 kubelet[2494]: E0129 11:55:11.157397 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79fca267-9a26-4684-b71e-b7f100ade442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sjsdr" podUID="79fca267-9a26-4684-b71e-b7f100ade442" Jan 29 11:55:11.159649 containerd[1460]: time="2025-01-29T11:55:11.159612737Z" level=error msg="StopPodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" failed" error="failed to destroy network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:55:11.159937 kubelet[2494]: E0129 11:55:11.159903 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:11.159937 kubelet[2494]: E0129 11:55:11.159935 2494 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4"} Jan 29 11:55:11.160058 kubelet[2494]: E0129 11:55:11.159957 2494 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83ff0ee0-50a2-4a27-851e-d262c1a81765\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:55:11.160058 kubelet[2494]: E0129 11:55:11.159975 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83ff0ee0-50a2-4a27-851e-d262c1a81765\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq" podUID="83ff0ee0-50a2-4a27-851e-d262c1a81765" Jan 29 11:55:11.164363 containerd[1460]: time="2025-01-29T11:55:11.164316486Z" level=error msg="StopPodSandbox for 
\"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" failed" error="failed to destroy network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:55:11.164490 kubelet[2494]: E0129 11:55:11.164466 2494 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:11.164535 kubelet[2494]: E0129 11:55:11.164494 2494 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9"} Jan 29 11:55:11.164535 kubelet[2494]: E0129 11:55:11.164515 2494 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cdf9998-9f22-47fa-be10-e527ac360095\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:55:11.164598 kubelet[2494]: E0129 11:55:11.164538 2494 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cdf9998-9f22-47fa-be10-e527ac360095\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q88sz" podUID="8cdf9998-9f22-47fa-be10-e527ac360095" Jan 29 11:55:11.630961 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:51096.service - OpenSSH per-connection server daemon (10.0.0.1:51096). Jan 29 11:55:11.678001 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 51096 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:11.679997 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:11.685383 systemd-logind[1438]: New session 8 of user core. Jan 29 11:55:11.695072 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:55:11.817219 sshd[3613]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:11.821325 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:51096.service: Deactivated successfully. Jan 29 11:55:11.823556 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:55:11.824268 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:55:11.825337 systemd-logind[1438]: Removed session 8. Jan 29 11:55:16.836243 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:51108.service - OpenSSH per-connection server daemon (10.0.0.1:51108). 
Jan 29 11:55:16.887731 sshd[3632]: Accepted publickey for core from 10.0.0.1 port 51108 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:16.890273 sshd[3632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:16.896364 systemd-logind[1438]: New session 9 of user core. Jan 29 11:55:16.904220 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:55:17.043501 sshd[3632]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:17.050897 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:51108.service: Deactivated successfully. Jan 29 11:55:17.054476 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:55:17.055475 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:55:17.057230 systemd-logind[1438]: Removed session 9. Jan 29 11:55:18.544111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541831312.mount: Deactivated successfully. Jan 29 11:55:19.638246 containerd[1460]: time="2025-01-29T11:55:19.638149355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:19.639395 containerd[1460]: time="2025-01-29T11:55:19.639340030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:55:19.640997 containerd[1460]: time="2025-01-29T11:55:19.640958228Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:19.646814 containerd[1460]: time="2025-01-29T11:55:19.645867767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.571542734s" Jan 29 11:55:19.646814 containerd[1460]: time="2025-01-29T11:55:19.645931837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:55:19.665384 containerd[1460]: time="2025-01-29T11:55:19.665210422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:19.665935 containerd[1460]: time="2025-01-29T11:55:19.665892192Z" level=info msg="CreateContainer within sandbox \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:55:19.688505 containerd[1460]: time="2025-01-29T11:55:19.688431440Z" level=info msg="CreateContainer within sandbox \"ce050a4027bc6a84e9480d5f26fc16d39a180ad767c66a17f6ce390c4280c4bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"babac0e9cc6682eba186e67545fb65c1ba5ee5aaaef85e437cffbe48c90c5b8e\"" Jan 29 11:55:19.689280 containerd[1460]: time="2025-01-29T11:55:19.689243685Z" level=info msg="StartContainer for \"babac0e9cc6682eba186e67545fb65c1ba5ee5aaaef85e437cffbe48c90c5b8e\"" Jan 29 11:55:19.773066 systemd[1]: Started cri-containerd-babac0e9cc6682eba186e67545fb65c1ba5ee5aaaef85e437cffbe48c90c5b8e.scope - libcontainer container 
babac0e9cc6682eba186e67545fb65c1ba5ee5aaaef85e437cffbe48c90c5b8e. Jan 29 11:55:20.102575 containerd[1460]: time="2025-01-29T11:55:20.102520432Z" level=info msg="StartContainer for \"babac0e9cc6682eba186e67545fb65c1ba5ee5aaaef85e437cffbe48c90c5b8e\" returns successfully" Jan 29 11:55:20.125639 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:55:20.125848 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 11:55:20.156601 kubelet[2494]: E0129 11:55:20.156239 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:20.173557 kubelet[2494]: I0129 11:55:20.172298 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-br8rn" podStartSLOduration=1.239407674 podStartE2EDuration="24.172274014s" podCreationTimestamp="2025-01-29 11:54:56 +0000 UTC" firstStartedPulling="2025-01-29 11:54:56.716435293 +0000 UTC m=+14.882112460" lastFinishedPulling="2025-01-29 11:55:19.649301643 +0000 UTC m=+37.814978800" observedRunningTime="2025-01-29 11:55:20.169597289 +0000 UTC m=+38.335274456" watchObservedRunningTime="2025-01-29 11:55:20.172274014 +0000 UTC m=+38.337951181" Jan 29 11:55:21.158148 kubelet[2494]: E0129 11:55:21.158105 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:21.926762 containerd[1460]: time="2025-01-29T11:55:21.926710622Z" level=info msg="StopPodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\"" Jan 29 11:55:22.054138 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:43700.service - OpenSSH per-connection server daemon (10.0.0.1:43700). Jan 29 11:55:22.186851 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 43700 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:22.190639 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:22.206889 systemd-logind[1438]: New session 10 of user core. Jan 29 11:55:22.218079 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:55:22.428278 sshd[3783]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:22.431872 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:43700.service: Deactivated successfully. Jan 29 11:55:22.434213 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:55:22.435572 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:55:22.437427 systemd-logind[1438]: Removed session 10. Jan 29 11:55:22.439826 kernel: bpftool[3933]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:55:22.677280 systemd-networkd[1376]: vxlan.calico: Link UP Jan 29 11:55:22.677290 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.395 [INFO][3781] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.396 [INFO][3781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" iface="eth0" netns="/var/run/netns/cni-c24a6396-67f6-50de-530d-afd68c09b49f" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.396 [INFO][3781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" iface="eth0" netns="/var/run/netns/cni-c24a6396-67f6-50de-530d-afd68c09b49f" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.404 [INFO][3781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" iface="eth0" netns="/var/run/netns/cni-c24a6396-67f6-50de-530d-afd68c09b49f" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.404 [INFO][3781] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.404 [INFO][3781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.766 [INFO][3922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.766 [INFO][3922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.767 [INFO][3922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.800 [WARNING][3922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.800 [INFO][3922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.878 [INFO][3922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:22.883926 containerd[1460]: 2025-01-29 11:55:22.881 [INFO][3781] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:22.884652 containerd[1460]: time="2025-01-29T11:55:22.884207060Z" level=info msg="TearDown network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" successfully" Jan 29 11:55:22.884652 containerd[1460]: time="2025-01-29T11:55:22.884245492Z" level=info msg="StopPodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" returns successfully" Jan 29 11:55:22.888853 systemd[1]: run-netns-cni\x2dc24a6396\x2d67f6\x2d50de\x2d530d\x2dafd68c09b49f.mount: Deactivated successfully. 
Jan 29 11:55:22.890151 containerd[1460]: time="2025-01-29T11:55:22.889872194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d77bcc79-l7ddq,Uid:83ff0ee0-50a2-4a27-851e-d262c1a81765,Namespace:calico-system,Attempt:1,}" Jan 29 11:55:22.926604 containerd[1460]: time="2025-01-29T11:55:22.926560631Z" level=info msg="StopPodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\"" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.112 [INFO][4007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.112 [INFO][4007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" iface="eth0" netns="/var/run/netns/cni-8b1628c0-1ceb-29a1-ed6f-3c0bedd67873" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.112 [INFO][4007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" iface="eth0" netns="/var/run/netns/cni-8b1628c0-1ceb-29a1-ed6f-3c0bedd67873" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.113 [INFO][4007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" iface="eth0" netns="/var/run/netns/cni-8b1628c0-1ceb-29a1-ed6f-3c0bedd67873" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.113 [INFO][4007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.113 [INFO][4007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.149 [INFO][4034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.149 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.149 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.254 [WARNING][4034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.254 [INFO][4034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.256 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
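The "About to acquire host-wide IPAM lock" / "Acquired" / "Released" triplets bracketing each IPAM call mark a host-global critical section: only one CNI invocation at a time may read or mutate allocation state, even while several sandboxes are being set up and torn down concurrently. A sketch of one way to implement such a lock with flock(2); the lock-file path here is an assumption for illustration, not Calico's actual location or mechanism:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // withHostIPAMLock serializes IPAM mutations across every CNI
    // invocation on the host, which is what the Acquired/Released lines
    // in the log record.
    func withHostIPAMLock(fn func() error) error {
        f, err := os.OpenFile("/tmp/ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        // Blocks here while another ADD/DEL holds the exclusive lock.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn()
    }

    func main() {
        err := withHostIPAMLock(func() error {
            fmt.Println("holding the lock: safe to read/modify allocation blocks")
            return nil
        })
        fmt.Println("lock released, err =", err)
    }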
Jan 29 11:55:23.262426 containerd[1460]: 2025-01-29 11:55:23.259 [INFO][4007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:23.263231 containerd[1460]: time="2025-01-29T11:55:23.262605137Z" level=info msg="TearDown network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" successfully" Jan 29 11:55:23.263231 containerd[1460]: time="2025-01-29T11:55:23.262639261Z" level=info msg="StopPodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" returns successfully" Jan 29 11:55:23.264162 containerd[1460]: time="2025-01-29T11:55:23.264100033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-l6br2,Uid:bd0ce080-79db-4e06-87fe-bc35e2d0e23b,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:55:23.265323 systemd[1]: run-netns-cni\x2d8b1628c0\x2d1ceb\x2d29a1\x2ded6f\x2d3c0bedd67873.mount: Deactivated successfully. Jan 29 11:55:23.682932 systemd-networkd[1376]: calid9b2952a2c6: Link UP Jan 29 11:55:23.684710 systemd-networkd[1376]: calid9b2952a2c6: Gained carrier Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.517 [INFO][4043] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0 calico-kube-controllers-d77bcc79- calico-system 83ff0ee0-50a2-4a27-851e-d262c1a81765 872 0 2025-01-29 11:54:56 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d77bcc79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d77bcc79-l7ddq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid9b2952a2c6 [] []}} ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.517 [INFO][4043] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.571 [INFO][4068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" HandleID="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.583 [INFO][4068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" HandleID="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027fe40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d77bcc79-l7ddq", "timestamp":"2025-01-29 
11:55:23.57187567 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.583 [INFO][4068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.583 [INFO][4068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.583 [INFO][4068] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.586 [INFO][4068] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.592 [INFO][4068] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.598 [INFO][4068] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.600 [INFO][4068] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.603 [INFO][4068] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.604 [INFO][4068] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.607 [INFO][4068] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.626 [INFO][4068] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.673 [INFO][4068] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.673 [INFO][4068] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" host="localhost" Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.673 [INFO][4068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
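The assignment sequence just logged is Calico's block-affinity IPAM at work: the host ("localhost") holds an affinity for the /26 block 192.168.88.128/26, so the first workload is handed 192.168.88.129 (the block's own address, .128, is skipped) and subsequent workloads take the next free addresses from the same block. A toy in-memory version of that scan, assuming none of Calico's datastore machinery:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for a Calico allocation block.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]bool
    }

    // assign returns the first free address after the block's base address,
    // which is why the log shows .129 handed out before .130.
    func (b *block) assign() (netip.Addr, error) {
        for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
            if !b.used[a] {
                b.used[a] = true
                return a, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]bool{}}
        for i := 0; i < 2; i++ {
            ip, _ := b.assign()
            fmt.Println("assigned", ip) // 192.168.88.129, then 192.168.88.130
        }
    }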
Jan 29 11:55:23.703704 containerd[1460]: 2025-01-29 11:55:23.673 [INFO][4068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" HandleID="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.704501 containerd[1460]: 2025-01-29 11:55:23.679 [INFO][4043] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0", GenerateName:"calico-kube-controllers-d77bcc79-", Namespace:"calico-system", SelfLink:"", UID:"83ff0ee0-50a2-4a27-851e-d262c1a81765", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d77bcc79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d77bcc79-l7ddq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9b2952a2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:23.704501 containerd[1460]: 2025-01-29 11:55:23.679 [INFO][4043] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.704501 containerd[1460]: 2025-01-29 11:55:23.679 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9b2952a2c6 ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.704501 containerd[1460]: 2025-01-29 11:55:23.685 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.704501 containerd[1460]: 2025-01-29 11:55:23.685 [INFO][4043] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0", GenerateName:"calico-kube-controllers-d77bcc79-", Namespace:"calico-system", SelfLink:"", UID:"83ff0ee0-50a2-4a27-851e-d262c1a81765", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d77bcc79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e", Pod:"calico-kube-controllers-d77bcc79-l7ddq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9b2952a2c6", MAC:"f6:35:5a:12:f6:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:23.704501 containerd[1460]: 2025-01-29 11:55:23.698 [INFO][4043] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Namespace="calico-system" Pod="calico-kube-controllers-d77bcc79-l7ddq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:23.747271 systemd-networkd[1376]: calicfc2f320acc: Link UP Jan 29 11:55:23.748205 systemd-networkd[1376]: calicfc2f320acc: Gained carrier Jan 29 11:55:23.753088 containerd[1460]: time="2025-01-29T11:55:23.752438305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:23.753500 containerd[1460]: time="2025-01-29T11:55:23.752982276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:23.753500 containerd[1460]: time="2025-01-29T11:55:23.753006692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:23.753909 containerd[1460]: time="2025-01-29T11:55:23.753746360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.563 [INFO][4057] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0 calico-apiserver-7857f547f9- calico-apiserver bd0ce080-79db-4e06-87fe-bc35e2d0e23b 877 0 2025-01-29 11:54:56 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7857f547f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7857f547f9-l6br2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicfc2f320acc [] []}} ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.563 [INFO][4057] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.602 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" HandleID="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.682 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" HandleID="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7857f547f9-l6br2", "timestamp":"2025-01-29 11:55:23.602556395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.682 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.682 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.682 [INFO][4077] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.687 [INFO][4077] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.692 [INFO][4077] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.704 [INFO][4077] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.707 [INFO][4077] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.710 [INFO][4077] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.710 [INFO][4077] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.714 [INFO][4077] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0 Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.725 [INFO][4077] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.737 [INFO][4077] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.737 [INFO][4077] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" host="localhost" Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.737 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:55:23.769348 containerd[1460]: 2025-01-29 11:55:23.737 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" HandleID="k8s-pod-network.4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.771041 containerd[1460]: 2025-01-29 11:55:23.743 [INFO][4057] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd0ce080-79db-4e06-87fe-bc35e2d0e23b", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7857f547f9-l6br2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfc2f320acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:23.771041 containerd[1460]: 2025-01-29 11:55:23.744 [INFO][4057] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.771041 containerd[1460]: 2025-01-29 11:55:23.744 [INFO][4057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfc2f320acc ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.771041 containerd[1460]: 2025-01-29 11:55:23.747 [INFO][4057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.771041 containerd[1460]: 2025-01-29 11:55:23.748 [INFO][4057] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd0ce080-79db-4e06-87fe-bc35e2d0e23b", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0", Pod:"calico-apiserver-7857f547f9-l6br2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfc2f320acc", MAC:"ea:68:d6:ad:88:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:23.771041 containerd[1460]: 2025-01-29 11:55:23.762 [INFO][4057] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-l6br2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:23.790030 systemd[1]: Started cri-containerd-b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e.scope - libcontainer container b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e. Jan 29 11:55:23.800609 containerd[1460]: time="2025-01-29T11:55:23.800358489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:23.801012 containerd[1460]: time="2025-01-29T11:55:23.800536713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:23.801012 containerd[1460]: time="2025-01-29T11:55:23.800552953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:23.801234 containerd[1460]: time="2025-01-29T11:55:23.801141399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:23.808012 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:23.823152 systemd[1]: Started cri-containerd-4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0.scope - libcontainer container 4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0. 
Jan 29 11:55:23.837413 containerd[1460]: time="2025-01-29T11:55:23.837333258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d77bcc79-l7ddq,Uid:83ff0ee0-50a2-4a27-851e-d262c1a81765,Namespace:calico-system,Attempt:1,} returns sandbox id \"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e\"" Jan 29 11:55:23.840475 containerd[1460]: time="2025-01-29T11:55:23.840325655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:55:23.843487 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:23.876626 containerd[1460]: time="2025-01-29T11:55:23.876578359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-l6br2,Uid:bd0ce080-79db-4e06-87fe-bc35e2d0e23b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0\"" Jan 29 11:55:23.927872 containerd[1460]: time="2025-01-29T11:55:23.926602912Z" level=info msg="StopPodSandbox for \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\"" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:23.977 [INFO][4208] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:23.978 [INFO][4208] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" iface="eth0" netns="/var/run/netns/cni-9e281284-181b-f6a7-534d-044bcd26034c" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:23.978 [INFO][4208] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" iface="eth0" netns="/var/run/netns/cni-9e281284-181b-f6a7-534d-044bcd26034c" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:23.979 [INFO][4208] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" iface="eth0" netns="/var/run/netns/cni-9e281284-181b-f6a7-534d-044bcd26034c" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:23.979 [INFO][4208] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:23.979 [INFO][4208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.002 [INFO][4216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.003 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.003 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.011 [WARNING][4216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.011 [INFO][4216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.013 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:24.020005 containerd[1460]: 2025-01-29 11:55:24.016 [INFO][4208] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:24.020460 containerd[1460]: time="2025-01-29T11:55:24.020251694Z" level=info msg="TearDown network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" successfully" Jan 29 11:55:24.020460 containerd[1460]: time="2025-01-29T11:55:24.020288082Z" level=info msg="StopPodSandbox for \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" returns successfully" Jan 29 11:55:24.020780 kubelet[2494]: E0129 11:55:24.020726 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:24.022466 containerd[1460]: time="2025-01-29T11:55:24.022013310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q88sz,Uid:8cdf9998-9f22-47fa-be10-e527ac360095,Namespace:kube-system,Attempt:1,}" Jan 29 11:55:24.024232 systemd[1]: run-netns-cni\x2d9e281284\x2d181b\x2df6a7\x2d534d\x2d044bcd26034c.mount: Deactivated successfully. 
Jan 29 11:55:24.337094 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 29 11:55:24.731224 systemd-networkd[1376]: cali0b82823d164: Link UP Jan 29 11:55:24.732096 systemd-networkd[1376]: cali0b82823d164: Gained carrier Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.426 [INFO][4227] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--q88sz-eth0 coredns-668d6bf9bc- kube-system 8cdf9998-9f22-47fa-be10-e527ac360095 889 0 2025-01-29 11:54:46 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-q88sz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0b82823d164 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.426 [INFO][4227] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.462 [INFO][4241] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" HandleID="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.471 [INFO][4241] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" HandleID="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002940b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-q88sz", "timestamp":"2025-01-29 11:55:24.462558844 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.471 [INFO][4241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.471 [INFO][4241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.471 [INFO][4241] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.473 [INFO][4241] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.477 [INFO][4241] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.481 [INFO][4241] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.483 [INFO][4241] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.485 [INFO][4241] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.485 [INFO][4241] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.486 [INFO][4241] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8 Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.695 [INFO][4241] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.722 [INFO][4241] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.723 [INFO][4241] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" host="localhost" Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.723 [INFO][4241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:55:24.881464 containerd[1460]: 2025-01-29 11:55:24.723 [INFO][4241] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" HandleID="k8s-pod-network.699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.882675 containerd[1460]: 2025-01-29 11:55:24.726 [INFO][4227] cni-plugin/k8s.go 386: Populated endpoint ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q88sz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cdf9998-9f22-47fa-be10-e527ac360095", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-q88sz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82823d164", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:24.882675 containerd[1460]: 2025-01-29 11:55:24.726 [INFO][4227] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.882675 containerd[1460]: 2025-01-29 11:55:24.726 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b82823d164 ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.882675 containerd[1460]: 2025-01-29 11:55:24.732 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.882675 containerd[1460]: 2025-01-29 
11:55:24.733 [INFO][4227] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q88sz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cdf9998-9f22-47fa-be10-e527ac360095", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8", Pod:"coredns-668d6bf9bc-q88sz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82823d164", MAC:"fa:aa:67:23:52:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:24.882675 containerd[1460]: 2025-01-29 11:55:24.877 [INFO][4227] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q88sz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:24.912959 systemd-networkd[1376]: calid9b2952a2c6: Gained IPv6LL Jan 29 11:55:24.926754 containerd[1460]: time="2025-01-29T11:55:24.926672122Z" level=info msg="StopPodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\"" Jan 29 11:55:25.056695 containerd[1460]: time="2025-01-29T11:55:25.056219953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:25.056695 containerd[1460]: time="2025-01-29T11:55:25.056309912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:25.056695 containerd[1460]: time="2025-01-29T11:55:25.056329569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:25.056695 containerd[1460]: time="2025-01-29T11:55:25.056431851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.031 [INFO][4279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.031 [INFO][4279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" iface="eth0" netns="/var/run/netns/cni-9dff2db7-f871-610f-d835-3e74e2b0e6cb" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.032 [INFO][4279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" iface="eth0" netns="/var/run/netns/cni-9dff2db7-f871-610f-d835-3e74e2b0e6cb" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.032 [INFO][4279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" iface="eth0" netns="/var/run/netns/cni-9dff2db7-f871-610f-d835-3e74e2b0e6cb" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.032 [INFO][4279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.032 [INFO][4279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.056 [INFO][4286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.057 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.057 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.068 [WARNING][4286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.068 [INFO][4286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.071 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:25.077799 containerd[1460]: 2025-01-29 11:55:25.074 [INFO][4279] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:25.078371 containerd[1460]: time="2025-01-29T11:55:25.078054219Z" level=info msg="TearDown network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" successfully" Jan 29 11:55:25.078371 containerd[1460]: time="2025-01-29T11:55:25.078095857Z" level=info msg="StopPodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" returns successfully" Jan 29 11:55:25.079213 containerd[1460]: time="2025-01-29T11:55:25.079160395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqqp5,Uid:3601942d-e4d5-4f58-9091-3f7871be8fee,Namespace:calico-system,Attempt:1,}" Jan 29 11:55:25.083158 systemd[1]: Started cri-containerd-699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8.scope - libcontainer container 699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8. Jan 29 11:55:25.087190 systemd[1]: run-netns-cni\x2d9dff2db7\x2df871\x2d610f\x2dd835\x2d3e74e2b0e6cb.mount: Deactivated successfully. Jan 29 11:55:25.100318 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:25.131442 containerd[1460]: time="2025-01-29T11:55:25.131375548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q88sz,Uid:8cdf9998-9f22-47fa-be10-e527ac360095,Namespace:kube-system,Attempt:1,} returns sandbox id \"699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8\"" Jan 29 11:55:25.133739 kubelet[2494]: E0129 11:55:25.133264 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:25.136572 containerd[1460]: time="2025-01-29T11:55:25.136537415Z" level=info msg="CreateContainer within sandbox \"699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:55:25.170571 containerd[1460]: time="2025-01-29T11:55:25.170497618Z" level=info msg="CreateContainer within sandbox \"699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32944ac547b0d805739c8755682f7ac3755aefa74cfe84c9ffb05c226b1b263a\"" Jan 29 11:55:25.171525 containerd[1460]: time="2025-01-29T11:55:25.171488058Z" level=info msg="StartContainer for \"32944ac547b0d805739c8755682f7ac3755aefa74cfe84c9ffb05c226b1b263a\"" Jan 29 11:55:25.205125 systemd[1]: Started cri-containerd-32944ac547b0d805739c8755682f7ac3755aefa74cfe84c9ffb05c226b1b263a.scope - libcontainer container 32944ac547b0d805739c8755682f7ac3755aefa74cfe84c9ffb05c226b1b263a. 
Jan 29 11:55:25.275306 systemd-networkd[1376]: calia5c81f92406: Link UP Jan 29 11:55:25.276535 systemd-networkd[1376]: calia5c81f92406: Gained carrier Jan 29 11:55:25.292280 containerd[1460]: time="2025-01-29T11:55:25.292229964Z" level=info msg="StartContainer for \"32944ac547b0d805739c8755682f7ac3755aefa74cfe84c9ffb05c226b1b263a\" returns successfully" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.147 [INFO][4329] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rqqp5-eth0 csi-node-driver- calico-system 3601942d-e4d5-4f58-9091-3f7871be8fee 896 0 2025-01-29 11:54:56 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rqqp5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia5c81f92406 [] []}} ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.147 [INFO][4329] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.184 [INFO][4346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" HandleID="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.194 [INFO][4346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" HandleID="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000504a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rqqp5", "timestamp":"2025-01-29 11:55:25.184058889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.194 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.195 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.195 [INFO][4346] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.199 [INFO][4346] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.205 [INFO][4346] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.212 [INFO][4346] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.214 [INFO][4346] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.217 [INFO][4346] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.217 [INFO][4346] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.219 [INFO][4346] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834 Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.226 [INFO][4346] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.267 [INFO][4346] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.267 [INFO][4346] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" host="localhost" Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.268 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:55:25.295037 containerd[1460]: 2025-01-29 11:55:25.268 [INFO][4346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" HandleID="k8s-pod-network.fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.295969 containerd[1460]: 2025-01-29 11:55:25.271 [INFO][4329] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rqqp5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3601942d-e4d5-4f58-9091-3f7871be8fee", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rqqp5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5c81f92406", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:25.295969 containerd[1460]: 2025-01-29 11:55:25.271 [INFO][4329] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.295969 containerd[1460]: 2025-01-29 11:55:25.271 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5c81f92406 ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.295969 containerd[1460]: 2025-01-29 11:55:25.277 [INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.295969 containerd[1460]: 2025-01-29 11:55:25.279 [INFO][4329] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rqqp5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3601942d-e4d5-4f58-9091-3f7871be8fee", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834", Pod:"csi-node-driver-rqqp5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5c81f92406", MAC:"96:7f:68:44:8b:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:25.295969 containerd[1460]: 2025-01-29 11:55:25.290 [INFO][4329] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834" Namespace="calico-system" Pod="csi-node-driver-rqqp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:25.319405 containerd[1460]: time="2025-01-29T11:55:25.319164212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:25.320267 containerd[1460]: time="2025-01-29T11:55:25.320108154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:25.320267 containerd[1460]: time="2025-01-29T11:55:25.320132349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:25.320475 containerd[1460]: time="2025-01-29T11:55:25.320227758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:25.348038 systemd[1]: Started cri-containerd-fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834.scope - libcontainer container fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834. 
Jan 29 11:55:25.362038 systemd-networkd[1376]: calicfc2f320acc: Gained IPv6LL Jan 29 11:55:25.364590 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:25.379880 containerd[1460]: time="2025-01-29T11:55:25.379825065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqqp5,Uid:3601942d-e4d5-4f58-9091-3f7871be8fee,Namespace:calico-system,Attempt:1,} returns sandbox id \"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834\"" Jan 29 11:55:25.926692 containerd[1460]: time="2025-01-29T11:55:25.926452725Z" level=info msg="StopPodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\"" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:25.989 [INFO][4467] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:25.989 [INFO][4467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" iface="eth0" netns="/var/run/netns/cni-b7a68113-d5a2-78b6-6b07-e71711c11148" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:25.989 [INFO][4467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" iface="eth0" netns="/var/run/netns/cni-b7a68113-d5a2-78b6-6b07-e71711c11148" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:25.990 [INFO][4467] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" iface="eth0" netns="/var/run/netns/cni-b7a68113-d5a2-78b6-6b07-e71711c11148" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:25.990 [INFO][4467] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:25.990 [INFO][4467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.017 [INFO][4474] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.017 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.017 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.023 [WARNING][4474] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.023 [INFO][4474] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.025 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:26.030513 containerd[1460]: 2025-01-29 11:55:26.028 [INFO][4467] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:26.031009 containerd[1460]: time="2025-01-29T11:55:26.030706134Z" level=info msg="TearDown network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" successfully" Jan 29 11:55:26.031009 containerd[1460]: time="2025-01-29T11:55:26.030747903Z" level=info msg="StopPodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" returns successfully" Jan 29 11:55:26.031532 containerd[1460]: time="2025-01-29T11:55:26.031489374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-cj9n8,Uid:130669b1-1d96-4e3f-83e0-176296743cad,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:55:26.069643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067668809.mount: Deactivated successfully. Jan 29 11:55:26.069842 systemd[1]: run-netns-cni\x2db7a68113\x2dd5a2\x2d78b6\x2d6b07\x2de71711c11148.mount: Deactivated successfully. 
Jan 29 11:55:26.164063 systemd-networkd[1376]: caliae0286e260f: Link UP Jan 29 11:55:26.164952 systemd-networkd[1376]: caliae0286e260f: Gained carrier Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.083 [INFO][4482] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0 calico-apiserver-7857f547f9- calico-apiserver 130669b1-1d96-4e3f-83e0-176296743cad 909 0 2025-01-29 11:54:56 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7857f547f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7857f547f9-cj9n8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae0286e260f [] []}} ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.083 [INFO][4482] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.113 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" HandleID="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.122 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" HandleID="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ded50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7857f547f9-cj9n8", "timestamp":"2025-01-29 11:55:26.113762899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.122 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.122 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.122 [INFO][4496] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.124 [INFO][4496] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.132 [INFO][4496] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.137 [INFO][4496] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.139 [INFO][4496] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.141 [INFO][4496] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.141 [INFO][4496] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.143 [INFO][4496] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858 Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.147 [INFO][4496] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.157 [INFO][4496] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.157 [INFO][4496] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" host="localhost" Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.157 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:55:26.177659 containerd[1460]: 2025-01-29 11:55:26.157 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" HandleID="k8s-pod-network.635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.178590 containerd[1460]: 2025-01-29 11:55:26.161 [INFO][4482] cni-plugin/k8s.go 386: Populated endpoint ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"130669b1-1d96-4e3f-83e0-176296743cad", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7857f547f9-cj9n8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae0286e260f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:26.178590 containerd[1460]: 2025-01-29 11:55:26.161 [INFO][4482] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.178590 containerd[1460]: 2025-01-29 11:55:26.161 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae0286e260f ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.178590 containerd[1460]: 2025-01-29 11:55:26.163 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.178590 containerd[1460]: 2025-01-29 11:55:26.163 [INFO][4482] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"130669b1-1d96-4e3f-83e0-176296743cad", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858", Pod:"calico-apiserver-7857f547f9-cj9n8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae0286e260f", MAC:"7e:86:3d:1b:50:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:26.178590 containerd[1460]: 2025-01-29 11:55:26.174 [INFO][4482] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858" Namespace="calico-apiserver" Pod="calico-apiserver-7857f547f9-cj9n8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:26.199700 containerd[1460]: time="2025-01-29T11:55:26.199443387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:26.199700 containerd[1460]: time="2025-01-29T11:55:26.199520642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:26.199700 containerd[1460]: time="2025-01-29T11:55:26.199540389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:26.199937 containerd[1460]: time="2025-01-29T11:55:26.199724504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:26.238011 systemd[1]: Started cri-containerd-635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858.scope - libcontainer container 635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858. 
Jan 29 11:55:26.246650 kubelet[2494]: E0129 11:55:26.244117 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:26.255369 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:26.266116 kubelet[2494]: I0129 11:55:26.265639 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q88sz" podStartSLOduration=40.265579929 podStartE2EDuration="40.265579929s" podCreationTimestamp="2025-01-29 11:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:55:26.262338686 +0000 UTC m=+44.428015863" watchObservedRunningTime="2025-01-29 11:55:26.265579929 +0000 UTC m=+44.431257096" Jan 29 11:55:26.292685 containerd[1460]: time="2025-01-29T11:55:26.292620453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7857f547f9-cj9n8,Uid:130669b1-1d96-4e3f-83e0-176296743cad,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858\"" Jan 29 11:55:26.577292 systemd-networkd[1376]: cali0b82823d164: Gained IPv6LL Jan 29 11:55:26.927256 containerd[1460]: time="2025-01-29T11:55:26.926783282Z" level=info msg="StopPodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\"" Jan 29 11:55:26.961886 systemd-networkd[1376]: calia5c81f92406: Gained IPv6LL Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:26.986 [INFO][4580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:26.986 [INFO][4580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" iface="eth0" netns="/var/run/netns/cni-5880b07f-cc73-3798-ffcd-7ccaae532834" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:26.987 [INFO][4580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" iface="eth0" netns="/var/run/netns/cni-5880b07f-cc73-3798-ffcd-7ccaae532834" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:26.987 [INFO][4580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" iface="eth0" netns="/var/run/netns/cni-5880b07f-cc73-3798-ffcd-7ccaae532834" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:26.987 [INFO][4580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:26.987 [INFO][4580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.013 [INFO][4588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.013 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.013 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.021 [WARNING][4588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.021 [INFO][4588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.022 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:27.029120 containerd[1460]: 2025-01-29 11:55:27.025 [INFO][4580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:27.029870 containerd[1460]: time="2025-01-29T11:55:27.029807575Z" level=info msg="TearDown network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" successfully" Jan 29 11:55:27.029941 containerd[1460]: time="2025-01-29T11:55:27.029927490Z" level=info msg="StopPodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" returns successfully" Jan 29 11:55:27.030376 kubelet[2494]: E0129 11:55:27.030339 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:27.031148 containerd[1460]: time="2025-01-29T11:55:27.031125889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjsdr,Uid:79fca267-9a26-4684-b71e-b7f100ade442,Namespace:kube-system,Attempt:1,}" Jan 29 11:55:27.034616 systemd[1]: run-netns-cni\x2d5880b07f\x2dcc73\x2d3798\x2dffcd\x2d7ccaae532834.mount: Deactivated successfully. 
Jan 29 11:55:27.180033 containerd[1460]: time="2025-01-29T11:55:27.179872652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:27.181664 containerd[1460]: time="2025-01-29T11:55:27.181617587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:55:27.183585 containerd[1460]: time="2025-01-29T11:55:27.183537339Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:27.186193 containerd[1460]: time="2025-01-29T11:55:27.186125646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:27.186943 containerd[1460]: time="2025-01-29T11:55:27.186904277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.346536714s" Jan 29 11:55:27.186943 containerd[1460]: time="2025-01-29T11:55:27.186938061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:55:27.188825 containerd[1460]: time="2025-01-29T11:55:27.188728230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:55:27.200631 containerd[1460]: time="2025-01-29T11:55:27.200404300Z" level=info msg="CreateContainer within sandbox \"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:55:27.200653 systemd-networkd[1376]: cali1d833553147: Link UP Jan 29 11:55:27.201866 systemd-networkd[1376]: cali1d833553147: Gained carrier Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.119 [INFO][4597] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0 coredns-668d6bf9bc- kube-system 79fca267-9a26-4684-b71e-b7f100ade442 929 0 2025-01-29 11:54:46 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sjsdr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d833553147 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.120 [INFO][4597] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.221813 
containerd[1460]: 2025-01-29 11:55:27.150 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" HandleID="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.160 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" HandleID="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sjsdr", "timestamp":"2025-01-29 11:55:27.150320418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.160 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.160 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.160 [INFO][4612] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.162 [INFO][4612] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.165 [INFO][4612] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.170 [INFO][4612] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.172 [INFO][4612] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.174 [INFO][4612] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.174 [INFO][4612] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.176 [INFO][4612] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6 Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.183 [INFO][4612] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.190 [INFO][4612] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.190 
[INFO][4612] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" host="localhost" Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.190 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:27.221813 containerd[1460]: 2025-01-29 11:55:27.190 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" HandleID="k8s-pod-network.fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.222389 containerd[1460]: 2025-01-29 11:55:27.196 [INFO][4597] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79fca267-9a26-4684-b71e-b7f100ade442", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sjsdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d833553147", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:27.222389 containerd[1460]: 2025-01-29 11:55:27.196 [INFO][4597] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.222389 containerd[1460]: 2025-01-29 11:55:27.196 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d833553147 ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.222389 containerd[1460]: 2025-01-29 11:55:27.202 
[INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.222389 containerd[1460]: 2025-01-29 11:55:27.202 [INFO][4597] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79fca267-9a26-4684-b71e-b7f100ade442", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6", Pod:"coredns-668d6bf9bc-sjsdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d833553147", MAC:"fe:1a:86:fc:c5:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:27.222389 containerd[1460]: 2025-01-29 11:55:27.214 [INFO][4597] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-sjsdr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:27.225164 containerd[1460]: time="2025-01-29T11:55:27.225112334Z" level=info msg="CreateContainer within sandbox \"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\"" Jan 29 11:55:27.226517 containerd[1460]: time="2025-01-29T11:55:27.226069731Z" level=info msg="StartContainer for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\"" Jan 29 11:55:27.256837 kubelet[2494]: E0129 11:55:27.256542 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 29 11:55:27.260203 containerd[1460]: time="2025-01-29T11:55:27.255000389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:27.260203 containerd[1460]: time="2025-01-29T11:55:27.256314134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:27.260203 containerd[1460]: time="2025-01-29T11:55:27.256332399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:27.260203 containerd[1460]: time="2025-01-29T11:55:27.256423760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:27.260227 systemd[1]: Started cri-containerd-19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0.scope - libcontainer container 19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0. Jan 29 11:55:27.298236 systemd[1]: Started cri-containerd-fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6.scope - libcontainer container fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6. Jan 29 11:55:27.312291 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:27.343085 containerd[1460]: time="2025-01-29T11:55:27.343039795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjsdr,Uid:79fca267-9a26-4684-b71e-b7f100ade442,Namespace:kube-system,Attempt:1,} returns sandbox id \"fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6\"" Jan 29 11:55:27.344097 kubelet[2494]: E0129 11:55:27.343918 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:27.345645 containerd[1460]: time="2025-01-29T11:55:27.345621760Z" level=info msg="CreateContainer within sandbox \"fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:55:27.408981 systemd-networkd[1376]: caliae0286e260f: Gained IPv6LL Jan 29 11:55:27.445201 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:43704.service - OpenSSH per-connection server daemon (10.0.0.1:43704). Jan 29 11:55:27.537719 containerd[1460]: time="2025-01-29T11:55:27.537647156Z" level=info msg="StartContainer for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" returns successfully" Jan 29 11:55:27.584718 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 43704 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:27.587095 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:27.602281 systemd-logind[1438]: New session 11 of user core. 
Jan 29 11:55:27.604873 containerd[1460]: time="2025-01-29T11:55:27.604817090Z" level=info msg="CreateContainer within sandbox \"fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52f8d07787e14d40e9a759f39ba4fd3b9965849f447d85ad1b1630e15ef3f144\"" Jan 29 11:55:27.605669 containerd[1460]: time="2025-01-29T11:55:27.605546509Z" level=info msg="StartContainer for \"52f8d07787e14d40e9a759f39ba4fd3b9965849f447d85ad1b1630e15ef3f144\"" Jan 29 11:55:27.608381 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:55:27.638070 systemd[1]: Started cri-containerd-52f8d07787e14d40e9a759f39ba4fd3b9965849f447d85ad1b1630e15ef3f144.scope - libcontainer container 52f8d07787e14d40e9a759f39ba4fd3b9965849f447d85ad1b1630e15ef3f144. Jan 29 11:55:27.720960 containerd[1460]: time="2025-01-29T11:55:27.720706857Z" level=info msg="StartContainer for \"52f8d07787e14d40e9a759f39ba4fd3b9965849f447d85ad1b1630e15ef3f144\" returns successfully" Jan 29 11:55:27.780988 sshd[4715]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:27.785605 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:43704.service: Deactivated successfully. Jan 29 11:55:27.788106 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:55:27.788986 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:55:27.790145 systemd-logind[1438]: Removed session 11. Jan 29 11:55:28.263450 kubelet[2494]: E0129 11:55:28.263406 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:28.264090 kubelet[2494]: E0129 11:55:28.263671 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:28.425364 kubelet[2494]: I0129 11:55:28.424778 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d77bcc79-l7ddq" podStartSLOduration=29.076244547 podStartE2EDuration="32.424756437s" podCreationTimestamp="2025-01-29 11:54:56 +0000 UTC" firstStartedPulling="2025-01-29 11:55:23.839494004 +0000 UTC m=+42.005171171" lastFinishedPulling="2025-01-29 11:55:27.188005894 +0000 UTC m=+45.353683061" observedRunningTime="2025-01-29 11:55:28.39365659 +0000 UTC m=+46.559333757" watchObservedRunningTime="2025-01-29 11:55:28.424756437 +0000 UTC m=+46.590433604" Jan 29 11:55:28.560020 kubelet[2494]: I0129 11:55:28.559768 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sjsdr" podStartSLOduration=42.559737747 podStartE2EDuration="42.559737747s" podCreationTimestamp="2025-01-29 11:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:55:28.55689344 +0000 UTC m=+46.722570607" watchObservedRunningTime="2025-01-29 11:55:28.559737747 +0000 UTC m=+46.725414914" Jan 29 11:55:28.946053 systemd-networkd[1376]: cali1d833553147: Gained IPv6LL Jan 29 11:55:29.265343 kubelet[2494]: E0129 11:55:29.265313 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:30.141504 containerd[1460]: time="2025-01-29T11:55:30.141430232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:30.142439 containerd[1460]: time="2025-01-29T11:55:30.142339207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:55:30.144597 containerd[1460]: time="2025-01-29T11:55:30.144553873Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:30.147460 containerd[1460]: time="2025-01-29T11:55:30.147380477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:30.148201 containerd[1460]: time="2025-01-29T11:55:30.148160340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.959395631s" Jan 29 11:55:30.148201 containerd[1460]: time="2025-01-29T11:55:30.148198592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:55:30.149579 containerd[1460]: time="2025-01-29T11:55:30.149389365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:55:30.151285 containerd[1460]: time="2025-01-29T11:55:30.150670880Z" level=info msg="CreateContainer within sandbox \"4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:55:30.166868 containerd[1460]: time="2025-01-29T11:55:30.166822414Z" level=info msg="CreateContainer within sandbox \"4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fa16df3eacd505192f6453a3d131da78f0e5ca736e0c8ff29ed199ed2e27830\"" Jan 29 11:55:30.167849 containerd[1460]: time="2025-01-29T11:55:30.167715379Z" level=info msg="StartContainer for \"8fa16df3eacd505192f6453a3d131da78f0e5ca736e0c8ff29ed199ed2e27830\"" Jan 29 11:55:30.215092 systemd[1]: Started cri-containerd-8fa16df3eacd505192f6453a3d131da78f0e5ca736e0c8ff29ed199ed2e27830.scope - libcontainer container 8fa16df3eacd505192f6453a3d131da78f0e5ca736e0c8ff29ed199ed2e27830.
Jan 29 11:55:30.479313 containerd[1460]: time="2025-01-29T11:55:30.479138814Z" level=info msg="StartContainer for \"8fa16df3eacd505192f6453a3d131da78f0e5ca736e0c8ff29ed199ed2e27830\" returns successfully" Jan 29 11:55:30.483130 kubelet[2494]: E0129 11:55:30.482710 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:30.494825 kubelet[2494]: I0129 11:55:30.492902 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7857f547f9-l6br2" podStartSLOduration=28.221942365 podStartE2EDuration="34.492879905s" podCreationTimestamp="2025-01-29 11:54:56 +0000 UTC" firstStartedPulling="2025-01-29 11:55:23.878272098 +0000 UTC m=+42.043949265" lastFinishedPulling="2025-01-29 11:55:30.149209638 +0000 UTC m=+48.314886805" observedRunningTime="2025-01-29 11:55:30.492742597 +0000 UTC m=+48.658419764" watchObservedRunningTime="2025-01-29 11:55:30.492879905 +0000 UTC m=+48.658557072" Jan 29 11:55:31.484327 kubelet[2494]: I0129 11:55:31.484268 2494 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:55:32.152390 containerd[1460]: time="2025-01-29T11:55:32.152279990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:32.186728 containerd[1460]: time="2025-01-29T11:55:32.186659519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:55:32.222362 containerd[1460]: time="2025-01-29T11:55:32.222321414Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:32.254476 containerd[1460]: time="2025-01-29T11:55:32.254425143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:32.255332 containerd[1460]: time="2025-01-29T11:55:32.255294103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.105867087s" Jan 29 11:55:32.255381 containerd[1460]: time="2025-01-29T11:55:32.255332535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:55:32.257498 containerd[1460]: time="2025-01-29T11:55:32.257176134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:55:32.259701 containerd[1460]: time="2025-01-29T11:55:32.259666216Z" level=info msg="CreateContainer within sandbox \"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 11:55:32.287738 containerd[1460]: time="2025-01-29T11:55:32.287679100Z" level=info msg="CreateContainer within sandbox \"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9e9374f5e48ef7c11c1ddd361517480da62e4c05e4bbfe136d2030b463353d2e\"" Jan 29 11:55:32.288260 containerd[1460]: time="2025-01-29T11:55:32.288236717Z" level=info msg="StartContainer for \"9e9374f5e48ef7c11c1ddd361517480da62e4c05e4bbfe136d2030b463353d2e\"" Jan 29 11:55:32.320932 systemd[1]: Started cri-containerd-9e9374f5e48ef7c11c1ddd361517480da62e4c05e4bbfe136d2030b463353d2e.scope - libcontainer container 9e9374f5e48ef7c11c1ddd361517480da62e4c05e4bbfe136d2030b463353d2e. Jan 29 11:55:32.399735 containerd[1460]: time="2025-01-29T11:55:32.399662090Z" level=info msg="StartContainer for \"9e9374f5e48ef7c11c1ddd361517480da62e4c05e4bbfe136d2030b463353d2e\" returns successfully" Jan 29 11:55:32.689731 containerd[1460]: time="2025-01-29T11:55:32.689637279Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:32.690527 containerd[1460]: time="2025-01-29T11:55:32.690471304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:55:32.693397 containerd[1460]: time="2025-01-29T11:55:32.693352850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 436.141249ms" Jan 29 11:55:32.693444 containerd[1460]: time="2025-01-29T11:55:32.693397203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:55:32.694729 containerd[1460]: time="2025-01-29T11:55:32.694701389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:55:32.696106 containerd[1460]: time="2025-01-29T11:55:32.696056742Z" level=info msg="CreateContainer within sandbox \"635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:55:32.712957 containerd[1460]: time="2025-01-29T11:55:32.712899821Z" level=info msg="CreateContainer within sandbox \"635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d84a8c594ea6846665e404e8b455b9fb716d29068c42ec48c2486e6a1c21d93\"" Jan 29 11:55:32.713524 containerd[1460]: time="2025-01-29T11:55:32.713479940Z" level=info msg="StartContainer for \"4d84a8c594ea6846665e404e8b455b9fb716d29068c42ec48c2486e6a1c21d93\"" Jan 29 11:55:32.742935 systemd[1]: Started cri-containerd-4d84a8c594ea6846665e404e8b455b9fb716d29068c42ec48c2486e6a1c21d93.scope - libcontainer container 4d84a8c594ea6846665e404e8b455b9fb716d29068c42ec48c2486e6a1c21d93. Jan 29 11:55:32.793744 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:43492.service - OpenSSH per-connection server daemon (10.0.0.1:43492).
Jan 29 11:55:32.879399 containerd[1460]: time="2025-01-29T11:55:32.879318356Z" level=info msg="StartContainer for \"4d84a8c594ea6846665e404e8b455b9fb716d29068c42ec48c2486e6a1c21d93\" returns successfully" Jan 29 11:55:32.884774 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 43492 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:32.887293 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:32.893635 systemd-logind[1438]: New session 12 of user core. Jan 29 11:55:32.901976 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:55:33.037063 sshd[4926]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:33.045875 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:43492.service: Deactivated successfully. Jan 29 11:55:33.047740 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:55:33.048818 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:55:33.057274 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:43500.service - OpenSSH per-connection server daemon (10.0.0.1:43500). Jan 29 11:55:33.060931 systemd-logind[1438]: Removed session 12. Jan 29 11:55:33.095015 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 43500 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:33.096693 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:33.104753 systemd-logind[1438]: New session 13 of user core. Jan 29 11:55:33.108945 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:55:33.263192 sshd[4946]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:33.277764 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:43500.service: Deactivated successfully. Jan 29 11:55:33.291053 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:55:33.295341 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:55:33.302216 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:43506.service - OpenSSH per-connection server daemon (10.0.0.1:43506). Jan 29 11:55:33.304731 systemd-logind[1438]: Removed session 13. Jan 29 11:55:33.338640 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 43506 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:33.340651 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:33.346840 systemd-logind[1438]: New session 14 of user core. Jan 29 11:55:33.357162 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:55:33.481250 sshd[4963]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:33.486749 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:43506.service: Deactivated successfully. Jan 29 11:55:33.490832 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:55:33.493116 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:55:33.494773 systemd-logind[1438]: Removed session 14. 
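
Sessions 11 through 14 each arrive on a socket-activated, per-connection sshd unit whose name encodes a connection counter plus the local and remote endpoints, e.g. sshd@13-10.0.0.98:22-10.0.0.1:43506.service. A sketch that recovers those fields from the unit name (the parsing is ours; the naming pattern is taken from the log itself):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Per-connection units look like sshd@<n>-<local>:<port>-<peer>:<port>.service.
    var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

    func main() {
        m := unitRe.FindStringSubmatch("sshd@13-10.0.0.98:22-10.0.0.1:43506.service")
        if m == nil {
            panic("unit name did not match")
        }
        fmt.Printf("connection #%s: local %s:%s <- peer %s:%s\n", m[1], m[2], m[3], m[4], m[5])
    }
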
Jan 29 11:55:34.495555 kubelet[2494]: I0129 11:55:34.495492 2494 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:55:34.752696 containerd[1460]: time="2025-01-29T11:55:34.752538753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:34.753539 containerd[1460]: time="2025-01-29T11:55:34.753434612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:55:34.754687 containerd[1460]: time="2025-01-29T11:55:34.754641577Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:34.756807 containerd[1460]: time="2025-01-29T11:55:34.756746015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:55:34.757378 containerd[1460]: time="2025-01-29T11:55:34.757342383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.062606389s" Jan 29 11:55:34.757378 containerd[1460]: time="2025-01-29T11:55:34.757373352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:55:34.759913 containerd[1460]: time="2025-01-29T11:55:34.759870927Z" level=info msg="CreateContainer within sandbox \"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:55:34.775505 containerd[1460]: time="2025-01-29T11:55:34.775458469Z" level=info msg="CreateContainer within sandbox \"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4ba28e09108e276e76e205484b1d2df2644a1f1502d7060759eadd540d077614\"" Jan 29 11:55:34.776010 containerd[1460]: time="2025-01-29T11:55:34.775928230Z" level=info msg="StartContainer for \"4ba28e09108e276e76e205484b1d2df2644a1f1502d7060759eadd540d077614\"" Jan 29 11:55:34.819126 systemd[1]: Started cri-containerd-4ba28e09108e276e76e205484b1d2df2644a1f1502d7060759eadd540d077614.scope - libcontainer container 4ba28e09108e276e76e205484b1d2df2644a1f1502d7060759eadd540d077614. 
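
The pod_startup_latency_tracker entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. Re-deriving the csi-node-driver-rqqp5 numbers reported just below:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps from the csi-node-driver-rqqp5 entry below.
        created := mustParse("2025-01-29 11:54:56 +0000 UTC")
        watched := mustParse("2025-01-29 11:55:35.514664124 +0000 UTC")
        pullStart := mustParse("2025-01-29 11:55:25.38187369 +0000 UTC")
        pullEnd := mustParse("2025-01-29 11:55:34.758326129 +0000 UTC")

        e2e := watched.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart) // exclude the image-pull window
        fmt.Println(e2e) // 39.514664124s, the logged podStartE2EDuration
        fmt.Println(slo) // 30.138211685s, the logged podStartSLOduration
    }
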
Jan 29 11:55:34.858322 containerd[1460]: time="2025-01-29T11:55:34.858036211Z" level=info msg="StartContainer for \"4ba28e09108e276e76e205484b1d2df2644a1f1502d7060759eadd540d077614\" returns successfully" Jan 29 11:55:35.004154 kubelet[2494]: I0129 11:55:35.004020 2494 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:55:35.004154 kubelet[2494]: I0129 11:55:35.004075 2494 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:55:35.514776 kubelet[2494]: I0129 11:55:35.514686 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rqqp5" podStartSLOduration=30.138211685 podStartE2EDuration="39.514664124s" podCreationTimestamp="2025-01-29 11:54:56 +0000 UTC" firstStartedPulling="2025-01-29 11:55:25.38187369 +0000 UTC m=+43.547550857" lastFinishedPulling="2025-01-29 11:55:34.758326129 +0000 UTC m=+52.924003296" observedRunningTime="2025-01-29 11:55:35.513957899 +0000 UTC m=+53.679635066" watchObservedRunningTime="2025-01-29 11:55:35.514664124 +0000 UTC m=+53.680341301" Jan 29 11:55:35.515414 kubelet[2494]: I0129 11:55:35.514897 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7857f547f9-cj9n8" podStartSLOduration=33.115023076 podStartE2EDuration="39.514887914s" podCreationTimestamp="2025-01-29 11:54:56 +0000 UTC" firstStartedPulling="2025-01-29 11:55:26.294582655 +0000 UTC m=+44.460259822" lastFinishedPulling="2025-01-29 11:55:32.694447493 +0000 UTC m=+50.860124660" observedRunningTime="2025-01-29 11:55:33.503190641 +0000 UTC m=+51.668867808" watchObservedRunningTime="2025-01-29 11:55:35.514887914 +0000 UTC m=+53.680565081" Jan 29 11:55:38.495549 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:43522.service - OpenSSH per-connection server daemon (10.0.0.1:43522). Jan 29 11:55:38.546322 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 43522 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:38.548611 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:38.553497 systemd-logind[1438]: New session 15 of user core. Jan 29 11:55:38.567014 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:55:38.703225 sshd[5030]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:38.709110 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:43522.service: Deactivated successfully. Jan 29 11:55:38.711437 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:55:38.712248 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:55:38.713566 systemd-logind[1438]: Removed session 15. Jan 29 11:55:39.266587 kubelet[2494]: I0129 11:55:39.266534 2494 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:55:39.379395 kubelet[2494]: I0129 11:55:39.379315 2494 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:55:41.919018 containerd[1460]: time="2025-01-29T11:55:41.918962672Z" level=info msg="StopPodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\"" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.955 [WARNING][5062] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd0ce080-79db-4e06-87fe-bc35e2d0e23b", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0", Pod:"calico-apiserver-7857f547f9-l6br2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfc2f320acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.955 [INFO][5062] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.955 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" iface="eth0" netns="" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.956 [INFO][5062] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.956 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.981 [INFO][5071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.981 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.981 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.987 [WARNING][5071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.987 [INFO][5071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.988 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:41.994390 containerd[1460]: 2025-01-29 11:55:41.990 [INFO][5062] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:41.994390 containerd[1460]: time="2025-01-29T11:55:41.994206188Z" level=info msg="TearDown network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" successfully" Jan 29 11:55:41.994390 containerd[1460]: time="2025-01-29T11:55:41.994238190Z" level=info msg="StopPodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" returns successfully" Jan 29 11:55:42.007270 containerd[1460]: time="2025-01-29T11:55:42.007194174Z" level=info msg="RemovePodSandbox for \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\"" Jan 29 11:55:42.014619 containerd[1460]: time="2025-01-29T11:55:42.014539404Z" level=info msg="Forcibly stopping sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\"" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.069 [WARNING][5093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd0ce080-79db-4e06-87fe-bc35e2d0e23b", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dc6df023b9f167b31a3069d70b3a51c0780561992df58b122294d6e71fb8bb0", Pod:"calico-apiserver-7857f547f9-l6br2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfc2f320acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.069 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.069 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" iface="eth0" netns="" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.069 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.069 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.090 [INFO][5101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.090 [INFO][5101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.090 [INFO][5101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.095 [WARNING][5101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.095 [INFO][5101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" HandleID="k8s-pod-network.2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Workload="localhost-k8s-calico--apiserver--7857f547f9--l6br2-eth0" Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.096 [INFO][5101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.101426 containerd[1460]: 2025-01-29 11:55:42.098 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045" Jan 29 11:55:42.102037 containerd[1460]: time="2025-01-29T11:55:42.101477911Z" level=info msg="TearDown network for sandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" successfully" Jan 29 11:55:42.122551 containerd[1460]: time="2025-01-29T11:55:42.122479273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:55:42.122704 containerd[1460]: time="2025-01-29T11:55:42.122616307Z" level=info msg="RemovePodSandbox \"2b701c0e99bfa56b56b17508d606a3f81a31bdb17ebcdde150085270ed469045\" returns successfully" Jan 29 11:55:42.123521 containerd[1460]: time="2025-01-29T11:55:42.123486645Z" level=info msg="StopPodSandbox for \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\"" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.171 [WARNING][5124] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q88sz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cdf9998-9f22-47fa-be10-e527ac360095", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8", Pod:"coredns-668d6bf9bc-q88sz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82823d164", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.171 [INFO][5124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.171 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" iface="eth0" netns="" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.171 [INFO][5124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.171 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.200 [INFO][5132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.200 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.200 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.206 [WARNING][5132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.206 [INFO][5132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.208 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.213150 containerd[1460]: 2025-01-29 11:55:42.210 [INFO][5124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.213150 containerd[1460]: time="2025-01-29T11:55:42.213086702Z" level=info msg="TearDown network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" successfully" Jan 29 11:55:42.213150 containerd[1460]: time="2025-01-29T11:55:42.213114697Z" level=info msg="StopPodSandbox for \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" returns successfully" Jan 29 11:55:42.214406 containerd[1460]: time="2025-01-29T11:55:42.213879190Z" level=info msg="RemovePodSandbox for \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\"" Jan 29 11:55:42.214406 containerd[1460]: time="2025-01-29T11:55:42.213925510Z" level=info msg="Forcibly stopping sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\"" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.250 [WARNING][5155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q88sz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cdf9998-9f22-47fa-be10-e527ac360095", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"699df5900b0c2dc6a1b34f53e19b99c52796afa66496aa535b113064bda89df8", Pod:"coredns-668d6bf9bc-q88sz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82823d164", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.250 [INFO][5155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.250 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" iface="eth0" netns="" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.250 [INFO][5155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.250 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.271 [INFO][5162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.272 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.272 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.277 [WARNING][5162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.277 [INFO][5162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" HandleID="k8s-pod-network.1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Workload="localhost-k8s-coredns--668d6bf9bc--q88sz-eth0" Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.278 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.283520 containerd[1460]: 2025-01-29 11:55:42.281 [INFO][5155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9" Jan 29 11:55:42.283979 containerd[1460]: time="2025-01-29T11:55:42.283573977Z" level=info msg="TearDown network for sandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" successfully" Jan 29 11:55:42.288843 containerd[1460]: time="2025-01-29T11:55:42.288765384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:55:42.289008 containerd[1460]: time="2025-01-29T11:55:42.288863132Z" level=info msg="RemovePodSandbox \"1966bc0a86cad6bafdeeb313f1cdff0cd36f7aa4e50cb37d87fba071342d0ff9\" returns successfully" Jan 29 11:55:42.289471 containerd[1460]: time="2025-01-29T11:55:42.289432941Z" level=info msg="StopPodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\"" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.325 [WARNING][5184] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0", GenerateName:"calico-kube-controllers-d77bcc79-", Namespace:"calico-system", SelfLink:"", UID:"83ff0ee0-50a2-4a27-851e-d262c1a81765", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d77bcc79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e", Pod:"calico-kube-controllers-d77bcc79-l7ddq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9b2952a2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.325 [INFO][5184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.325 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" iface="eth0" netns="" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.325 [INFO][5184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.325 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.346 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.346 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.346 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.352 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.352 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.354 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.359180 containerd[1460]: 2025-01-29 11:55:42.356 [INFO][5184] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.359863 containerd[1460]: time="2025-01-29T11:55:42.359220526Z" level=info msg="TearDown network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" successfully" Jan 29 11:55:42.359863 containerd[1460]: time="2025-01-29T11:55:42.359255765Z" level=info msg="StopPodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" returns successfully" Jan 29 11:55:42.359984 containerd[1460]: time="2025-01-29T11:55:42.359886581Z" level=info msg="RemovePodSandbox for \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\"" Jan 29 11:55:42.360025 containerd[1460]: time="2025-01-29T11:55:42.359996112Z" level=info msg="Forcibly stopping sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\"" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.398 [WARNING][5215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0", GenerateName:"calico-kube-controllers-d77bcc79-", Namespace:"calico-system", SelfLink:"", UID:"83ff0ee0-50a2-4a27-851e-d262c1a81765", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d77bcc79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e", Pod:"calico-kube-controllers-d77bcc79-l7ddq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9b2952a2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.398 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.398 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" iface="eth0" netns="" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.398 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.398 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.419 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.419 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.419 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.425 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.425 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" HandleID="k8s-pod-network.78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0" Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.427 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.431858 containerd[1460]: 2025-01-29 11:55:42.429 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4" Jan 29 11:55:42.432358 containerd[1460]: time="2025-01-29T11:55:42.431909456Z" level=info msg="TearDown network for sandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" successfully" Jan 29 11:55:42.436501 containerd[1460]: time="2025-01-29T11:55:42.436407096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:55:42.436501 containerd[1460]: time="2025-01-29T11:55:42.436502209Z" level=info msg="RemovePodSandbox \"78e8d212964ebf67b3709a2152487f27fc7759bfe3943aa88795d7b6eaacb5e4\" returns successfully" Jan 29 11:55:42.437186 containerd[1460]: time="2025-01-29T11:55:42.437157662Z" level=info msg="StopPodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\"" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.477 [WARNING][5245] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"130669b1-1d96-4e3f-83e0-176296743cad", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858", Pod:"calico-apiserver-7857f547f9-cj9n8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae0286e260f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.477 [INFO][5245] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.477 [INFO][5245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" iface="eth0" netns="" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.477 [INFO][5245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.477 [INFO][5245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.497 [INFO][5253] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.498 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.498 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.503 [WARNING][5253] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.503 [INFO][5253] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.505 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.510075 containerd[1460]: 2025-01-29 11:55:42.507 [INFO][5245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.510628 containerd[1460]: time="2025-01-29T11:55:42.510161558Z" level=info msg="TearDown network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" successfully" Jan 29 11:55:42.510628 containerd[1460]: time="2025-01-29T11:55:42.510198619Z" level=info msg="StopPodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" returns successfully" Jan 29 11:55:42.510958 containerd[1460]: time="2025-01-29T11:55:42.510903207Z" level=info msg="RemovePodSandbox for \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\"" Jan 29 11:55:42.510958 containerd[1460]: time="2025-01-29T11:55:42.510954186Z" level=info msg="Forcibly stopping sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\"" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.546 [WARNING][5275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0", GenerateName:"calico-apiserver-7857f547f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"130669b1-1d96-4e3f-83e0-176296743cad", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7857f547f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"635edecb2356fb0e1232e4667afb3883f7db07ff6b8881afa445abe87e216858", Pod:"calico-apiserver-7857f547f9-cj9n8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae0286e260f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.547 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.547 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" iface="eth0" netns="" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.547 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.547 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.569 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.569 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.569 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.574 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.575 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" HandleID="k8s-pod-network.43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Workload="localhost-k8s-calico--apiserver--7857f547f9--cj9n8-eth0" Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.576 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.581485 containerd[1460]: 2025-01-29 11:55:42.579 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112" Jan 29 11:55:42.581964 containerd[1460]: time="2025-01-29T11:55:42.581540101Z" level=info msg="TearDown network for sandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" successfully" Jan 29 11:55:42.599251 containerd[1460]: time="2025-01-29T11:55:42.599179012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:55:42.599251 containerd[1460]: time="2025-01-29T11:55:42.599253295Z" level=info msg="RemovePodSandbox \"43215ebc71c0552b46d1db35e646a5dba391c3255d5d8e08849dd8dd9f99d112\" returns successfully" Jan 29 11:55:42.600017 containerd[1460]: time="2025-01-29T11:55:42.599973043Z" level=info msg="StopPodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\"" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.637 [WARNING][5305] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rqqp5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3601942d-e4d5-4f58-9091-3f7871be8fee", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834", Pod:"csi-node-driver-rqqp5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5c81f92406", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.638 [INFO][5305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.638 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" iface="eth0" netns="" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.638 [INFO][5305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.638 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.659 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.659 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.659 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.665 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.665 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.667 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.672200 containerd[1460]: 2025-01-29 11:55:42.669 [INFO][5305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.672873 containerd[1460]: time="2025-01-29T11:55:42.672237062Z" level=info msg="TearDown network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" successfully" Jan 29 11:55:42.672873 containerd[1460]: time="2025-01-29T11:55:42.672270878Z" level=info msg="StopPodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" returns successfully" Jan 29 11:55:42.672946 containerd[1460]: time="2025-01-29T11:55:42.672882517Z" level=info msg="RemovePodSandbox for \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\"" Jan 29 11:55:42.672946 containerd[1460]: time="2025-01-29T11:55:42.672915510Z" level=info msg="Forcibly stopping sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\"" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.712 [WARNING][5335] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rqqp5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3601942d-e4d5-4f58-9091-3f7871be8fee", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fedbd6bb9e0ecafa81997c668d541daba83a02e260659c19cc61f098fb775834", Pod:"csi-node-driver-rqqp5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5c81f92406", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.714 [INFO][5335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.714 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" iface="eth0" netns="" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.714 [INFO][5335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.714 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.733 [INFO][5343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.733 [INFO][5343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.733 [INFO][5343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.739 [WARNING][5343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.739 [INFO][5343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" HandleID="k8s-pod-network.ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Workload="localhost-k8s-csi--node--driver--rqqp5-eth0" Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.741 [INFO][5343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.745984 containerd[1460]: 2025-01-29 11:55:42.743 [INFO][5335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4" Jan 29 11:55:42.746503 containerd[1460]: time="2025-01-29T11:55:42.746034327Z" level=info msg="TearDown network for sandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" successfully" Jan 29 11:55:42.750743 containerd[1460]: time="2025-01-29T11:55:42.750662499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:55:42.750743 containerd[1460]: time="2025-01-29T11:55:42.750743615Z" level=info msg="RemovePodSandbox \"ca6ce0d56756a6616b9ba8cb116e9ff23950a7851ff92a0b47d60590ed0fb9d4\" returns successfully" Jan 29 11:55:42.751392 containerd[1460]: time="2025-01-29T11:55:42.751349243Z" level=info msg="StopPodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\"" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.793 [WARNING][5366] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79fca267-9a26-4684-b71e-b7f100ade442", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6", Pod:"coredns-668d6bf9bc-sjsdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d833553147", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.793 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.793 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" iface="eth0" netns="" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.793 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.793 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.818 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.818 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.818 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.824 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.824 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.825 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.831073 containerd[1460]: 2025-01-29 11:55:42.828 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.831073 containerd[1460]: time="2025-01-29T11:55:42.831008998Z" level=info msg="TearDown network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" successfully" Jan 29 11:55:42.831073 containerd[1460]: time="2025-01-29T11:55:42.831037594Z" level=info msg="StopPodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" returns successfully" Jan 29 11:55:42.831709 containerd[1460]: time="2025-01-29T11:55:42.831658691Z" level=info msg="RemovePodSandbox for \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\"" Jan 29 11:55:42.831709 containerd[1460]: time="2025-01-29T11:55:42.831684481Z" level=info msg="Forcibly stopping sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\"" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.873 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"79fca267-9a26-4684-b71e-b7f100ade442", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 54, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbbf5aa42cc32dd80c74902c6086e77f1fd464ca421e44b5e8213398b63c82b6", Pod:"coredns-668d6bf9bc-sjsdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d833553147", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.874 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.874 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" iface="eth0" netns="" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.874 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.874 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.895 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.895 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.895 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.901 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.901 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" HandleID="k8s-pod-network.12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Workload="localhost-k8s-coredns--668d6bf9bc--sjsdr-eth0" Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.903 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:55:42.908210 containerd[1460]: 2025-01-29 11:55:42.905 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8" Jan 29 11:55:42.908677 containerd[1460]: time="2025-01-29T11:55:42.908261784Z" level=info msg="TearDown network for sandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" successfully" Jan 29 11:55:42.912353 containerd[1460]: time="2025-01-29T11:55:42.912318835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:55:42.912394 containerd[1460]: time="2025-01-29T11:55:42.912373661Z" level=info msg="RemovePodSandbox \"12d77fdcf63a89d33130f873125c78e6456050df86149d0cbc68539da2996be8\" returns successfully" Jan 29 11:55:43.716307 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:58470.service - OpenSSH per-connection server daemon (10.0.0.1:58470). Jan 29 11:55:43.774236 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 58470 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:43.776459 sshd[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:43.781445 systemd-logind[1438]: New session 16 of user core. Jan 29 11:55:43.789070 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:55:43.916457 sshd[5419]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:43.921522 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:58470.service: Deactivated successfully. Jan 29 11:55:43.924598 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:55:43.925426 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:55:43.926524 systemd-logind[1438]: Removed session 16. Jan 29 11:55:48.928838 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478). Jan 29 11:55:48.969730 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:48.971636 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:48.976196 systemd-logind[1438]: New session 17 of user core. Jan 29 11:55:48.982938 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 11:55:49.110335 sshd[5438]: pam_unix(sshd:session): session closed for user core
Jan 29 11:55:49.115691 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:58478.service: Deactivated successfully.
Jan 29 11:55:49.119028 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:55:49.119973 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:55:49.121185 systemd-logind[1438]: Removed session 17.
Jan 29 11:55:51.181190 systemd[1]: run-containerd-runc-k8s.io-babac0e9cc6682eba186e67545fb65c1ba5ee5aaaef85e437cffbe48c90c5b8e-runc.3QoaGF.mount: Deactivated successfully.
Jan 29 11:55:51.236443 kubelet[2494]: E0129 11:55:51.236403 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:52.926463 kubelet[2494]: E0129 11:55:52.926403 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:54.124584 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:50898.service - OpenSSH per-connection server daemon (10.0.0.1:50898).
Jan 29 11:55:54.183615 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 50898 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:55:54.185710 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:55:54.190780 systemd-logind[1438]: New session 18 of user core.
Jan 29 11:55:54.199059 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:55:54.330132 sshd[5474]: pam_unix(sshd:session): session closed for user core
Jan 29 11:55:54.335915 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:50898.service: Deactivated successfully.
Jan 29 11:55:54.338206 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:55:54.338974 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:55:54.340053 systemd-logind[1438]: Removed session 18.
Jan 29 11:55:56.804252 systemd[1]: run-containerd-runc-k8s.io-19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0-runc.fSFwKp.mount: Deactivated successfully.
Jan 29 11:55:56.893835 containerd[1460]: time="2025-01-29T11:55:56.893665494Z" level=info msg="StopContainer for \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\" with timeout 300 (s)"
Jan 29 11:55:56.897637 containerd[1460]: time="2025-01-29T11:55:56.897585074Z" level=info msg="Stop container \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\" with signal terminated"
Jan 29 11:55:56.966742 containerd[1460]: time="2025-01-29T11:55:56.966661002Z" level=info msg="StopContainer for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" with timeout 30 (s)"
Jan 29 11:55:56.967308 containerd[1460]: time="2025-01-29T11:55:56.967284113Z" level=info msg="Stop container \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" with signal terminated"
Jan 29 11:55:56.985907 systemd[1]: cri-containerd-19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0.scope: Deactivated successfully.
Jan 29 11:55:57.019814 containerd[1460]: time="2025-01-29T11:55:57.019521177Z" level=info msg="shim disconnected" id=19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0 namespace=k8s.io
Jan 29 11:55:57.019814 containerd[1460]: time="2025-01-29T11:55:57.019698215Z" level=warning msg="cleaning up after shim disconnected" id=19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0 namespace=k8s.io
Jan 29 11:55:57.019814 containerd[1460]: time="2025-01-29T11:55:57.019711670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:55:57.023178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0-rootfs.mount: Deactivated successfully.
Jan 29 11:55:57.143198 containerd[1460]: time="2025-01-29T11:55:57.143062267Z" level=info msg="StopContainer for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" returns successfully"
Jan 29 11:55:57.143636 containerd[1460]: time="2025-01-29T11:55:57.143612269Z" level=info msg="StopPodSandbox for \"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e\""
Jan 29 11:55:57.143716 containerd[1460]: time="2025-01-29T11:55:57.143648387Z" level=info msg="Container to stop \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:55:57.147120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e-shm.mount: Deactivated successfully.
Jan 29 11:55:57.151002 systemd[1]: cri-containerd-b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e.scope: Deactivated successfully.
Jan 29 11:55:57.171970 containerd[1460]: time="2025-01-29T11:55:57.171707098Z" level=info msg="shim disconnected" id=b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e namespace=k8s.io
Jan 29 11:55:57.171970 containerd[1460]: time="2025-01-29T11:55:57.171782533Z" level=warning msg="cleaning up after shim disconnected" id=b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e namespace=k8s.io
Jan 29 11:55:57.171970 containerd[1460]: time="2025-01-29T11:55:57.171817920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:55:57.174901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e-rootfs.mount: Deactivated successfully.
Jan 29 11:55:57.299222 systemd-networkd[1376]: calid9b2952a2c6: Link DOWN
Jan 29 11:55:57.299812 systemd-networkd[1376]: calid9b2952a2c6: Lost carrier
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.297 [INFO][5599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.297 [INFO][5599] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" iface="eth0" netns="/var/run/netns/cni-9ba8531c-61cf-db22-521e-601fdc874c9c"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.297 [INFO][5599] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" iface="eth0" netns="/var/run/netns/cni-9ba8531c-61cf-db22-521e-601fdc874c9c"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.314 [INFO][5599] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" after=16.511105ms iface="eth0" netns="/var/run/netns/cni-9ba8531c-61cf-db22-521e-601fdc874c9c"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.314 [INFO][5599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.314 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.335 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" HandleID="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.336 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.336 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.371 [INFO][5613] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" HandleID="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.371 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" HandleID="k8s-pod-network.b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e" Workload="localhost-k8s-calico--kube--controllers--d77bcc79--l7ddq-eth0"
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.372 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:55:57.379782 containerd[1460]: 2025-01-29 11:55:57.376 [INFO][5599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e"
Jan 29 11:55:57.381065 containerd[1460]: time="2025-01-29T11:55:57.380372946Z" level=info msg="TearDown network for sandbox \"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e\" successfully"
Jan 29 11:55:57.381065 containerd[1460]: time="2025-01-29T11:55:57.380409516Z" level=info msg="StopPodSandbox for \"b89b75bc3771c26252a70f143df8cd8a9838c61f111d12d53ecb964065cb838e\" returns successfully"
Jan 29 11:55:57.441153 kubelet[2494]: I0129 11:55:57.440982 2494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83ff0ee0-50a2-4a27-851e-d262c1a81765-tigera-ca-bundle\") pod \"83ff0ee0-50a2-4a27-851e-d262c1a81765\" (UID: \"83ff0ee0-50a2-4a27-851e-d262c1a81765\") "
Jan 29 11:55:57.441153 kubelet[2494]: I0129 11:55:57.441032 2494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8l4m\" (UniqueName: \"kubernetes.io/projected/83ff0ee0-50a2-4a27-851e-d262c1a81765-kube-api-access-p8l4m\") pod \"83ff0ee0-50a2-4a27-851e-d262c1a81765\" (UID: \"83ff0ee0-50a2-4a27-851e-d262c1a81765\") "
Jan 29 11:55:57.445151 kubelet[2494]: I0129 11:55:57.445092 2494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ff0ee0-50a2-4a27-851e-d262c1a81765-kube-api-access-p8l4m" (OuterVolumeSpecName: "kube-api-access-p8l4m") pod "83ff0ee0-50a2-4a27-851e-d262c1a81765" (UID: "83ff0ee0-50a2-4a27-851e-d262c1a81765"). InnerVolumeSpecName "kube-api-access-p8l4m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:55:57.446929 kubelet[2494]: I0129 11:55:57.446877 2494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ff0ee0-50a2-4a27-851e-d262c1a81765-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "83ff0ee0-50a2-4a27-851e-d262c1a81765" (UID: "83ff0ee0-50a2-4a27-851e-d262c1a81765"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:55:57.541997 kubelet[2494]: I0129 11:55:57.541915 2494 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83ff0ee0-50a2-4a27-851e-d262c1a81765-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 29 11:55:57.541997 kubelet[2494]: I0129 11:55:57.541969 2494 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8l4m\" (UniqueName: \"kubernetes.io/projected/83ff0ee0-50a2-4a27-851e-d262c1a81765-kube-api-access-p8l4m\") on node \"localhost\" DevicePath \"\""
Jan 29 11:55:57.558251 kubelet[2494]: I0129 11:55:57.558134 2494 scope.go:117] "RemoveContainer" containerID="19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0"
Jan 29 11:55:57.560209 containerd[1460]: time="2025-01-29T11:55:57.560147232Z" level=info msg="RemoveContainer for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\""
Jan 29 11:55:57.565278 systemd[1]: Removed slice kubepods-besteffort-pod83ff0ee0_50a2_4a27_851e_d262c1a81765.slice - libcontainer container kubepods-besteffort-pod83ff0ee0_50a2_4a27_851e_d262c1a81765.slice.
Jan 29 11:55:57.566376 containerd[1460]: time="2025-01-29T11:55:57.566324099Z" level=info msg="RemoveContainer for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" returns successfully"
Jan 29 11:55:57.566655 kubelet[2494]: I0129 11:55:57.566617 2494 scope.go:117] "RemoveContainer" containerID="19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0"
Jan 29 11:55:57.584271 containerd[1460]: time="2025-01-29T11:55:57.575441714Z" level=error msg="ContainerStatus for \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\": not found"
Jan 29 11:55:57.584498 kubelet[2494]: E0129 11:55:57.584469 2494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\": not found" containerID="19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0"
Jan 29 11:55:57.584576 kubelet[2494]: I0129 11:55:57.584511 2494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0"} err="failed to get container status \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"19596aab03f2255ec16de069e64bd2962bdd262b0dd58f53adafdbe7437fd5c0\": not found"
Jan 29 11:55:57.600161 kubelet[2494]: I0129 11:55:57.600098 2494 memory_manager.go:355] "RemoveStaleState removing state" podUID="83ff0ee0-50a2-4a27-851e-d262c1a81765" containerName="calico-kube-controllers"
Jan 29 11:55:57.611246 systemd[1]: Created slice kubepods-besteffort-poddd0c11fa_f306_4e84_b048_22481e441728.slice - libcontainer container kubepods-besteffort-poddd0c11fa_f306_4e84_b048_22481e441728.slice.
Jan 29 11:55:57.642547 kubelet[2494]: I0129 11:55:57.642458 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd0c11fa-f306-4e84-b048-22481e441728-tigera-ca-bundle\") pod \"calico-kube-controllers-6fd9f647b-dlc54\" (UID: \"dd0c11fa-f306-4e84-b048-22481e441728\") " pod="calico-system/calico-kube-controllers-6fd9f647b-dlc54"
Jan 29 11:55:57.642547 kubelet[2494]: I0129 11:55:57.642541 2494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj42m\" (UniqueName: \"kubernetes.io/projected/dd0c11fa-f306-4e84-b048-22481e441728-kube-api-access-vj42m\") pod \"calico-kube-controllers-6fd9f647b-dlc54\" (UID: \"dd0c11fa-f306-4e84-b048-22481e441728\") " pod="calico-system/calico-kube-controllers-6fd9f647b-dlc54"
Jan 29 11:55:57.798809 systemd[1]: var-lib-kubelet-pods-83ff0ee0\x2d50a2\x2d4a27\x2d851e\x2dd262c1a81765-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Jan 29 11:55:57.798977 systemd[1]: run-netns-cni\x2d9ba8531c\x2d61cf\x2ddb22\x2d521e\x2d601fdc874c9c.mount: Deactivated successfully.
Jan 29 11:55:57.799080 systemd[1]: var-lib-kubelet-pods-83ff0ee0\x2d50a2\x2d4a27\x2d851e\x2dd262c1a81765-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp8l4m.mount: Deactivated successfully.
Jan 29 11:55:57.916736 containerd[1460]: time="2025-01-29T11:55:57.916661296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd9f647b-dlc54,Uid:dd0c11fa-f306-4e84-b048-22481e441728,Namespace:calico-system,Attempt:0,}"
Jan 29 11:55:57.931812 kubelet[2494]: I0129 11:55:57.931098 2494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ff0ee0-50a2-4a27-851e-d262c1a81765" path="/var/lib/kubelet/pods/83ff0ee0-50a2-4a27-851e-d262c1a81765/volumes"
Jan 29 11:55:58.031101 systemd-networkd[1376]: cali5b1b1fcee45: Link UP
Jan 29 11:55:58.031325 systemd-networkd[1376]: cali5b1b1fcee45: Gained carrier
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:57.963 [INFO][5625] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0 calico-kube-controllers-6fd9f647b- calico-system dd0c11fa-f306-4e84-b048-22481e441728 1229 0 2025-01-29 11:55:57 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fd9f647b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fd9f647b-dlc54 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5b1b1fcee45 [] []}} ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:57.963 [INFO][5625] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:57.993 [INFO][5638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" HandleID="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Workload="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.001 [INFO][5638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" HandleID="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Workload="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fd9f647b-dlc54", "timestamp":"2025-01-29 11:55:57.99309847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.001 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.001 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.001 [INFO][5638] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.003 [INFO][5638] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.006 [INFO][5638] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.011 [INFO][5638] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.012 [INFO][5638] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.014 [INFO][5638] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.014 [INFO][5638] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.016 [INFO][5638] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.020 [INFO][5638] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.025 [INFO][5638] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.025 [INFO][5638] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" host="localhost"
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.025 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:55:58.045890 containerd[1460]: 2025-01-29 11:55:58.025 [INFO][5638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" HandleID="k8s-pod-network.f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Workload="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.046520 containerd[1460]: 2025-01-29 11:55:58.028 [INFO][5625] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0", GenerateName:"calico-kube-controllers-6fd9f647b-", Namespace:"calico-system", SelfLink:"", UID:"dd0c11fa-f306-4e84-b048-22481e441728", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 55, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd9f647b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fd9f647b-dlc54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b1b1fcee45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:55:58.046520 containerd[1460]: 2025-01-29 11:55:58.028 [INFO][5625] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.046520 containerd[1460]: 2025-01-29 11:55:58.028 [INFO][5625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b1b1fcee45 ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.046520 containerd[1460]: 2025-01-29 11:55:58.031 [INFO][5625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.046520 containerd[1460]: 2025-01-29 11:55:58.032 [INFO][5625] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0", GenerateName:"calico-kube-controllers-6fd9f647b-", Namespace:"calico-system", SelfLink:"", UID:"dd0c11fa-f306-4e84-b048-22481e441728", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 55, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd9f647b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7", Pod:"calico-kube-controllers-6fd9f647b-dlc54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b1b1fcee45", MAC:"1e:20:8f:d4:21:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:55:58.046520 containerd[1460]: 2025-01-29 11:55:58.042 [INFO][5625] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7" Namespace="calico-system" Pod="calico-kube-controllers-6fd9f647b-dlc54" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd9f647b--dlc54-eth0"
Jan 29 11:55:58.074023 containerd[1460]: time="2025-01-29T11:55:58.072455435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:55:58.074023 containerd[1460]: time="2025-01-29T11:55:58.072527934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:55:58.074023 containerd[1460]: time="2025-01-29T11:55:58.072594371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:55:58.074023 containerd[1460]: time="2025-01-29T11:55:58.072698519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:55:58.104997 systemd[1]: Started cri-containerd-f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7.scope - libcontainer container f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7.
Jan 29 11:55:58.118706 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:55:58.144413 containerd[1460]: time="2025-01-29T11:55:58.144364295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd9f647b-dlc54,Uid:dd0c11fa-f306-4e84-b048-22481e441728,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7\""
Jan 29 11:55:58.153598 containerd[1460]: time="2025-01-29T11:55:58.153547206Z" level=info msg="CreateContainer within sandbox \"f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 29 11:55:58.171005 containerd[1460]: time="2025-01-29T11:55:58.170934068Z" level=info msg="CreateContainer within sandbox \"f2d43c1c95e2a28692e96964ce64b6277082411036bf8f83083e863aabbc41f7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"14d32fe6dee060b9c3455f556925af8ecf4e14a0f453be8f3cf3a0abc6c8d5d2\""
Jan 29 11:55:58.171669 containerd[1460]: time="2025-01-29T11:55:58.171618114Z" level=info msg="StartContainer for \"14d32fe6dee060b9c3455f556925af8ecf4e14a0f453be8f3cf3a0abc6c8d5d2\""
Jan 29 11:55:58.204960 systemd[1]: Started cri-containerd-14d32fe6dee060b9c3455f556925af8ecf4e14a0f453be8f3cf3a0abc6c8d5d2.scope - libcontainer container 14d32fe6dee060b9c3455f556925af8ecf4e14a0f453be8f3cf3a0abc6c8d5d2.
Jan 29 11:55:58.253872 containerd[1460]: time="2025-01-29T11:55:58.253051798Z" level=info msg="StartContainer for \"14d32fe6dee060b9c3455f556925af8ecf4e14a0f453be8f3cf3a0abc6c8d5d2\" returns successfully"
Jan 29 11:55:58.576085 kubelet[2494]: I0129 11:55:58.576001 2494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fd9f647b-dlc54" podStartSLOduration=1.5759700030000001 podStartE2EDuration="1.575970003s" podCreationTimestamp="2025-01-29 11:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:55:58.574748591 +0000 UTC m=+76.740425758" watchObservedRunningTime="2025-01-29 11:55:58.575970003 +0000 UTC m=+76.741647190"
Jan 29 11:55:59.349556 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:50908.service - OpenSSH per-connection server daemon (10.0.0.1:50908).
Jan 29 11:55:59.395451 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 50908 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:55:59.397562 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:55:59.402277 systemd-logind[1438]: New session 19 of user core.
Jan 29 11:55:59.412929 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:55:59.538804 sshd[5768]: pam_unix(sshd:session): session closed for user core
Jan 29 11:55:59.550450 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:50908.service: Deactivated successfully.
Jan 29 11:55:59.552962 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:55:59.555122 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:55:59.561147 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:50914.service - OpenSSH per-connection server daemon (10.0.0.1:50914).
Jan 29 11:55:59.562729 systemd-logind[1438]: Removed session 19.
Jan 29 11:55:59.596521 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 50914 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:55:59.598854 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:55:59.604132 systemd-logind[1438]: New session 20 of user core.
Jan 29 11:55:59.610011 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:55:59.924670 sshd[5783]: pam_unix(sshd:session): session closed for user core
Jan 29 11:55:59.941043 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:50914.service: Deactivated successfully.
Jan 29 11:55:59.943494 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:55:59.945580 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:55:59.951184 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:50928.service - OpenSSH per-connection server daemon (10.0.0.1:50928).
Jan 29 11:55:59.953893 systemd-logind[1438]: Removed session 20.
Jan 29 11:56:00.001209 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 50928 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:00.004075 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:00.012554 systemd-logind[1438]: New session 21 of user core.
Jan 29 11:56:00.021004 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:56:00.049073 systemd-networkd[1376]: cali5b1b1fcee45: Gained IPv6LL
Jan 29 11:56:00.855645 sshd[5816]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:00.865107 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:50928.service: Deactivated successfully.
Jan 29 11:56:00.867466 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:56:00.872417 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:56:00.879162 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:50938.service - OpenSSH per-connection server daemon (10.0.0.1:50938).
Jan 29 11:56:00.882308 systemd-logind[1438]: Removed session 21.
Jan 29 11:56:00.918498 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 50938 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:00.920714 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:00.925962 kubelet[2494]: E0129 11:56:00.925923 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:56:00.926832 systemd-logind[1438]: New session 22 of user core.
Jan 29 11:56:00.932968 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:56:01.221785 sshd[5866]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:01.233399 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:50938.service: Deactivated successfully.
Jan 29 11:56:01.238472 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:56:01.242392 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:56:01.255453 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:59250.service - OpenSSH per-connection server daemon (10.0.0.1:59250).
Jan 29 11:56:01.258360 systemd-logind[1438]: Removed session 22.
Jan 29 11:56:01.301890 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 59250 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:01.303800 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:01.310331 systemd-logind[1438]: New session 23 of user core.
Jan 29 11:56:01.316002 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:56:01.448071 sshd[5912]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:01.453148 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:59250.service: Deactivated successfully.
Jan 29 11:56:01.456454 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:56:01.457376 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:56:01.458311 systemd-logind[1438]: Removed session 23.
Jan 29 11:56:01.465648 systemd[1]: cri-containerd-f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6.scope: Deactivated successfully.
Jan 29 11:56:01.489847 containerd[1460]: time="2025-01-29T11:56:01.489630781Z" level=info msg="shim disconnected" id=f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6 namespace=k8s.io
Jan 29 11:56:01.489847 containerd[1460]: time="2025-01-29T11:56:01.489697468Z" level=warning msg="cleaning up after shim disconnected" id=f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6 namespace=k8s.io
Jan 29 11:56:01.489847 containerd[1460]: time="2025-01-29T11:56:01.489707177Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:56:01.493512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6-rootfs.mount: Deactivated successfully.
Jan 29 11:56:01.527445 containerd[1460]: time="2025-01-29T11:56:01.527379270Z" level=info msg="StopContainer for \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\" returns successfully"
Jan 29 11:56:01.527985 containerd[1460]: time="2025-01-29T11:56:01.527956129Z" level=info msg="StopPodSandbox for \"594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96\""
Jan 29 11:56:01.528049 containerd[1460]: time="2025-01-29T11:56:01.528009711Z" level=info msg="Container to stop \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:56:01.532066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96-shm.mount: Deactivated successfully.
Jan 29 11:56:01.536282 systemd[1]: cri-containerd-594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96.scope: Deactivated successfully.
Jan 29 11:56:01.559573 containerd[1460]: time="2025-01-29T11:56:01.559261264Z" level=info msg="shim disconnected" id=594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96 namespace=k8s.io
Jan 29 11:56:01.559573 containerd[1460]: time="2025-01-29T11:56:01.559326458Z" level=warning msg="cleaning up after shim disconnected" id=594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96 namespace=k8s.io
Jan 29 11:56:01.559573 containerd[1460]: time="2025-01-29T11:56:01.559338250Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:56:01.561746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96-rootfs.mount: Deactivated successfully.
Jan 29 11:56:01.583948 containerd[1460]: time="2025-01-29T11:56:01.583891602Z" level=info msg="TearDown network for sandbox \"594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96\" successfully"
Jan 29 11:56:01.583948 containerd[1460]: time="2025-01-29T11:56:01.583923493Z" level=info msg="StopPodSandbox for \"594de2674f4a3435bcd3da3238531fbfc197362a1d2cf178ee95c58144786c96\" returns successfully"
Jan 29 11:56:01.670882 kubelet[2494]: I0129 11:56:01.670786 2494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d788k\" (UniqueName: \"kubernetes.io/projected/666715fe-8a5c-41bc-946d-1fd9b726994c-kube-api-access-d788k\") pod \"666715fe-8a5c-41bc-946d-1fd9b726994c\" (UID: \"666715fe-8a5c-41bc-946d-1fd9b726994c\") "
Jan 29 11:56:01.670882 kubelet[2494]: I0129 11:56:01.670870 2494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666715fe-8a5c-41bc-946d-1fd9b726994c-tigera-ca-bundle\") pod \"666715fe-8a5c-41bc-946d-1fd9b726994c\" (UID: \"666715fe-8a5c-41bc-946d-1fd9b726994c\") "
Jan 29 11:56:01.670882 kubelet[2494]: I0129 11:56:01.670893 2494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/666715fe-8a5c-41bc-946d-1fd9b726994c-typha-certs\") pod \"666715fe-8a5c-41bc-946d-1fd9b726994c\" (UID: \"666715fe-8a5c-41bc-946d-1fd9b726994c\") "
Jan 29 11:56:01.677552 kubelet[2494]: I0129 11:56:01.677486 2494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666715fe-8a5c-41bc-946d-1fd9b726994c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "666715fe-8a5c-41bc-946d-1fd9b726994c" (UID: "666715fe-8a5c-41bc-946d-1fd9b726994c"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 29 11:56:01.677944 kubelet[2494]: I0129 11:56:01.677858 2494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666715fe-8a5c-41bc-946d-1fd9b726994c-kube-api-access-d788k" (OuterVolumeSpecName: "kube-api-access-d788k") pod "666715fe-8a5c-41bc-946d-1fd9b726994c" (UID: "666715fe-8a5c-41bc-946d-1fd9b726994c"). InnerVolumeSpecName "kube-api-access-d788k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:56:01.677944 kubelet[2494]: I0129 11:56:01.677902 2494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/666715fe-8a5c-41bc-946d-1fd9b726994c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "666715fe-8a5c-41bc-946d-1fd9b726994c" (UID: "666715fe-8a5c-41bc-946d-1fd9b726994c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:56:01.678093 systemd[1]: var-lib-kubelet-pods-666715fe\x2d8a5c\x2d41bc\x2d946d\x2d1fd9b726994c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Jan 29 11:56:01.681936 systemd[1]: var-lib-kubelet-pods-666715fe\x2d8a5c\x2d41bc\x2d946d\x2d1fd9b726994c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Jan 29 11:56:01.682066 systemd[1]: var-lib-kubelet-pods-666715fe\x2d8a5c\x2d41bc\x2d946d\x2d1fd9b726994c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd788k.mount: Deactivated successfully.
Jan 29 11:56:01.771341 kubelet[2494]: I0129 11:56:01.771274 2494 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666715fe-8a5c-41bc-946d-1fd9b726994c-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 29 11:56:01.771341 kubelet[2494]: I0129 11:56:01.771325 2494 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/666715fe-8a5c-41bc-946d-1fd9b726994c-typha-certs\") on node \"localhost\" DevicePath \"\""
Jan 29 11:56:01.771341 kubelet[2494]: I0129 11:56:01.771338 2494 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d788k\" (UniqueName: \"kubernetes.io/projected/666715fe-8a5c-41bc-946d-1fd9b726994c-kube-api-access-d788k\") on node \"localhost\" DevicePath \"\""
Jan 29 11:56:01.934618 systemd[1]: Removed slice kubepods-besteffort-pod666715fe_8a5c_41bc_946d_1fd9b726994c.slice - libcontainer container kubepods-besteffort-pod666715fe_8a5c_41bc_946d_1fd9b726994c.slice.
Jan 29 11:56:02.580705 kubelet[2494]: I0129 11:56:02.580659 2494 scope.go:117] "RemoveContainer" containerID="f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6"
Jan 29 11:56:02.582291 containerd[1460]: time="2025-01-29T11:56:02.582252082Z" level=info msg="RemoveContainer for \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\""
Jan 29 11:56:02.588901 containerd[1460]: time="2025-01-29T11:56:02.588830729Z" level=info msg="RemoveContainer for \"f126d5a951b3c5b3b89e9aeb695dcabacac07ef9e29835f90c4555ec8eb5c6f6\" returns successfully"
Jan 29 11:56:02.926873 kubelet[2494]: E0129 11:56:02.926652 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:56:03.928424 kubelet[2494]: I0129 11:56:03.928359 2494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666715fe-8a5c-41bc-946d-1fd9b726994c" path="/var/lib/kubelet/pods/666715fe-8a5c-41bc-946d-1fd9b726994c/volumes"
Jan 29 11:56:05.926217 kubelet[2494]: E0129 11:56:05.926175 2494 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:56:06.462221 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:59266.service - OpenSSH per-connection server daemon (10.0.0.1:59266).
Jan 29 11:56:06.503762 sshd[6099]: Accepted publickey for core from 10.0.0.1 port 59266 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:06.505607 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:06.510200 systemd-logind[1438]: New session 24 of user core.
Jan 29 11:56:06.515952 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:56:06.637719 sshd[6099]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:06.642959 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:59266.service: Deactivated successfully.
Jan 29 11:56:06.646269 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:56:06.647224 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:56:06.648333 systemd-logind[1438]: Removed session 24.
Jan 29 11:56:11.659301 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:37948.service - OpenSSH per-connection server daemon (10.0.0.1:37948).
Jan 29 11:56:11.701306 sshd[6216]: Accepted publickey for core from 10.0.0.1 port 37948 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:11.704379 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:11.710197 systemd-logind[1438]: New session 25 of user core.
Jan 29 11:56:11.718011 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:56:11.843613 sshd[6216]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:11.847682 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:37948.service: Deactivated successfully.
Jan 29 11:56:11.849867 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:56:11.850475 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:56:11.851484 systemd-logind[1438]: Removed session 25.
Jan 29 11:56:16.856319 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956).
Jan 29 11:56:16.896010 sshd[6326]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:16.897728 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:16.902306 systemd-logind[1438]: New session 26 of user core.
Jan 29 11:56:16.912979 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:56:17.036144 sshd[6326]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:17.041863 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:37956.service: Deactivated successfully.
Jan 29 11:56:17.044399 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:56:17.045254 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:56:17.046449 systemd-logind[1438]: Removed session 26.
Jan 29 11:56:22.060386 systemd[1]: Started sshd@26-10.0.0.98:22-10.0.0.1:48494.service - OpenSSH per-connection server daemon (10.0.0.1:48494).
Jan 29 11:56:22.105118 sshd[6463]: Accepted publickey for core from 10.0.0.1 port 48494 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:56:22.107612 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:56:22.115487 systemd-logind[1438]: New session 27 of user core.
Jan 29 11:56:22.124150 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 11:56:22.257395 sshd[6463]: pam_unix(sshd:session): session closed for user core
Jan 29 11:56:22.262126 systemd[1]: sshd@26-10.0.0.98:22-10.0.0.1:48494.service: Deactivated successfully.
Jan 29 11:56:22.264933 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:56:22.265715 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit.
Jan 29 11:56:22.266671 systemd-logind[1438]: Removed session 27.