Jan 29 11:44:24.903544 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:44:24.903574 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:44:24.903585 kernel: BIOS-provided physical RAM map:
Jan 29 11:44:24.903591 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:44:24.903597 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:44:24.903603 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:44:24.903610 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:44:24.903617 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:44:24.903623 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 29 11:44:24.903629 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 29 11:44:24.903637 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 29 11:44:24.903643 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 29 11:44:24.903652 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 29 11:44:24.903659 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 29 11:44:24.903669 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 29 11:44:24.903675 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:44:24.903684 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 29 11:44:24.903691 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 29 11:44:24.903698 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:44:24.903704 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:44:24.903711 kernel: NX (Execute Disable) protection: active
Jan 29 11:44:24.903717 kernel: APIC: Static calls initialized
Jan 29 11:44:24.903724 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:44:24.903731 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 29 11:44:24.903737 kernel: SMBIOS 2.8 present.
Jan 29 11:44:24.903744 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 29 11:44:24.903750 kernel: Hypervisor detected: KVM
Jan 29 11:44:24.903759 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:44:24.903766 kernel: kvm-clock: using sched offset of 4297685847 cycles
Jan 29 11:44:24.903773 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:44:24.903780 kernel: tsc: Detected 2794.750 MHz processor
Jan 29 11:44:24.903787 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:44:24.903794 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:44:24.903801 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 29 11:44:24.903808 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:44:24.903815 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:44:24.903824 kernel: Using GB pages for direct mapping
Jan 29 11:44:24.903831 kernel: Secure boot disabled
Jan 29 11:44:24.903837 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:44:24.903845 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 29 11:44:24.903858 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:44:24.903871 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:44:24.903882 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:44:24.903892 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 29 11:44:24.903904 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:44:24.903945 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:44:24.903953 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:44:24.903967 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:44:24.903981 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:44:24.903989 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 29 11:44:24.904001 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 29 11:44:24.904008 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 29 11:44:24.904015 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 29 11:44:24.904022 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 29 11:44:24.904029 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 29 11:44:24.904039 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 29 11:44:24.904061 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 29 11:44:24.904076 kernel: No NUMA configuration found
Jan 29 11:44:24.904110 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 29 11:44:24.904135 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 29 11:44:24.904143 kernel: Zone ranges:
Jan 29 11:44:24.904157 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:44:24.904165 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 29 11:44:24.904172 kernel: Normal empty
Jan 29 11:44:24.904185 kernel: Movable zone start for each node
Jan 29 11:44:24.904195 kernel: Early memory node ranges
Jan 29 11:44:24.904202 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:44:24.904209 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 29 11:44:24.904216 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 29 11:44:24.904225 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 29 11:44:24.904233 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 29 11:44:24.904240 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 29 11:44:24.904253 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 29 11:44:24.904271 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:44:24.904282 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:44:24.904296 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 29 11:44:24.904303 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:44:24.904316 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 29 11:44:24.904326 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 29 11:44:24.904339 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 29 11:44:24.904358 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:44:24.904377 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:44:24.904393 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:44:24.904410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:44:24.904418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:44:24.904433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:44:24.904446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:44:24.904461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:44:24.904469 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:44:24.904482 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:44:24.904495 kernel: TSC deadline timer available
Jan 29 11:44:24.904509 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:44:24.904528 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:44:24.904550 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:44:24.904570 kernel: kvm-guest: setup PV sched yield
Jan 29 11:44:24.904589 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 29 11:44:24.904611 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:44:24.904625 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:44:24.904645 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:44:24.904664 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:44:24.904683 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:44:24.904699 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:44:24.904707 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:44:24.904723 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:44:24.904742 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:44:24.904772 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:44:24.904790 kernel: random: crng init done
Jan 29 11:44:24.904806 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:44:24.904814 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:44:24.904821 kernel: Fallback order for Node 0: 0
Jan 29 11:44:24.904828 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 29 11:44:24.904838 kernel: Policy zone: DMA32
Jan 29 11:44:24.904847 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:44:24.904861 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 29 11:44:24.904868 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:44:24.904879 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:44:24.904898 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:44:24.904941 kernel: Dynamic Preempt: voluntary
Jan 29 11:44:24.904968 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:44:24.904981 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:44:24.904989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:44:24.905003 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:44:24.905013 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:44:24.905027 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:44:24.905042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:44:24.905052 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:44:24.905066 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:44:24.905088 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:44:24.905104 kernel: Console: colour dummy device 80x25
Jan 29 11:44:24.905112 kernel: printk: console [ttyS0] enabled
Jan 29 11:44:24.905122 kernel: ACPI: Core revision 20230628
Jan 29 11:44:24.905141 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:44:24.905160 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:44:24.905177 kernel: x2apic enabled
Jan 29 11:44:24.905196 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:44:24.905204 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:44:24.905217 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:44:24.905225 kernel: kvm-guest: setup PV IPIs
Jan 29 11:44:24.905233 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:44:24.905246 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:44:24.905255 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 29 11:44:24.905269 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:44:24.905279 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:44:24.905286 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:44:24.905294 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:44:24.905301 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:44:24.905309 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:44:24.905316 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:44:24.905327 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:44:24.905334 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:44:24.905342 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:44:24.905349 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:44:24.905359 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:44:24.905367 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:44:24.905375 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:44:24.905382 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:44:24.905392 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:44:24.905399 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:44:24.905407 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:44:24.905414 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:44:24.905422 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:44:24.905429 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:44:24.905437 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:44:24.905444 kernel: landlock: Up and running.
Jan 29 11:44:24.905451 kernel: SELinux: Initializing.
Jan 29 11:44:24.905461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:44:24.905468 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:44:24.905476 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:44:24.905484 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:44:24.905491 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:44:24.905499 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:44:24.905507 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:44:24.905514 kernel: ... version: 0
Jan 29 11:44:24.905521 kernel: ... bit width: 48
Jan 29 11:44:24.905531 kernel: ... generic registers: 6
Jan 29 11:44:24.905538 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:44:24.905555 kernel: ... max period: 00007fffffffffff
Jan 29 11:44:24.905563 kernel: ... fixed-purpose events: 0
Jan 29 11:44:24.905571 kernel: ... event mask: 000000000000003f
Jan 29 11:44:24.905578 kernel: signal: max sigframe size: 1776
Jan 29 11:44:24.905585 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:44:24.905593 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:44:24.905600 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:44:24.905610 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:44:24.905618 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:44:24.905635 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:44:24.905654 kernel: smpboot: Max logical packages: 1
Jan 29 11:44:24.905681 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 29 11:44:24.905698 kernel: devtmpfs: initialized
Jan 29 11:44:24.905712 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:44:24.905727 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 29 11:44:24.905742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 29 11:44:24.905752 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 29 11:44:24.905769 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 29 11:44:24.905784 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 29 11:44:24.905791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:44:24.905799 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:44:24.905806 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:44:24.905814 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:44:24.905821 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:44:24.905829 kernel: audit: type=2000 audit(1738151063.491:1): state=initialized audit_enabled=0 res=1
Jan 29 11:44:24.905848 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:44:24.905862 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:44:24.905870 kernel: cpuidle: using governor menu
Jan 29 11:44:24.905877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:44:24.905884 kernel: dca service started, version 1.12.1
Jan 29 11:44:24.905892 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:44:24.905903 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:44:24.905933 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:44:24.905948 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:44:24.905958 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:44:24.905966 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:44:24.905973 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:44:24.905980 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:44:24.905988 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:44:24.905995 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:44:24.906009 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:44:24.906016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:44:24.906024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:44:24.906034 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:44:24.906041 kernel: ACPI: Interpreter enabled
Jan 29 11:44:24.906049 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:44:24.906056 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:44:24.906064 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:44:24.906071 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:44:24.906081 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:44:24.906089 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:44:24.906378 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:44:24.906553 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:44:24.906679 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:44:24.906690 kernel: PCI host bridge to bus 0000:00
Jan 29 11:44:24.906812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:44:24.906947 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:44:24.907076 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:44:24.907191 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:44:24.907947 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:44:24.908128 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 29 11:44:24.908273 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:44:24.908429 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:44:24.908570 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:44:24.908700 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 29 11:44:24.908838 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 29 11:44:24.908988 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 29 11:44:24.909221 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 29 11:44:24.909381 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:44:24.909518 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:44:24.909650 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 29 11:44:24.909778 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 29 11:44:24.909949 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 29 11:44:24.910187 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:44:24.910390 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 29 11:44:24.910673 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 29 11:44:24.910907 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 29 11:44:24.911125 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:44:24.911254 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 29 11:44:24.911386 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 29 11:44:24.911507 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 29 11:44:24.911626 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 29 11:44:24.911753 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:44:24.911900 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:44:24.912054 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:44:24.912214 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 29 11:44:24.913047 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 29 11:44:24.913194 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:44:24.913332 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 29 11:44:24.913344 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:44:24.913352 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:44:24.913360 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:44:24.913372 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:44:24.913380 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:44:24.913387 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:44:24.913394 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:44:24.913402 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:44:24.913409 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:44:24.913417 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:44:24.913424 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:44:24.913432 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:44:24.913441 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:44:24.913449 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:44:24.913456 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:44:24.913463 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:44:24.913471 kernel: iommu: Default domain type: Translated
Jan 29 11:44:24.913478 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:44:24.913486 kernel: efivars: Registered efivars operations
Jan 29 11:44:24.913493 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:44:24.913501 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:44:24.913511 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 29 11:44:24.913519 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 29 11:44:24.913528 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 29 11:44:24.913536 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 29 11:44:24.913680 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:44:24.913801 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:44:24.913932 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:44:24.913943 kernel: vgaarb: loaded
Jan 29 11:44:24.913951 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:44:24.913962 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:44:24.913970 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:44:24.913978 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:44:24.913985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:44:24.913993 kernel: pnp: PnP ACPI init
Jan 29 11:44:24.914123 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:44:24.914134 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:44:24.914142 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:44:24.914153 kernel: NET: Registered PF_INET protocol family
Jan 29 11:44:24.914161 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:44:24.914168 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:44:24.914176 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:44:24.914184 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:44:24.914191 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:44:24.914199 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:44:24.914206 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:44:24.914214 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:44:24.914223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:44:24.914231 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:44:24.914363 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 29 11:44:24.914485 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 29 11:44:24.914596 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:44:24.914723 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:44:24.914833 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:44:24.914958 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:44:24.915072 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:44:24.915182 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 29 11:44:24.915191 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:44:24.915199 kernel: Initialise system trusted keyrings
Jan 29 11:44:24.915206 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:44:24.915214 kernel: Key type asymmetric registered
Jan 29 11:44:24.915221 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:44:24.915229 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:44:24.915236 kernel: io scheduler mq-deadline registered
Jan 29 11:44:24.915247 kernel: io scheduler kyber registered
Jan 29 11:44:24.915255 kernel: io scheduler bfq registered
Jan 29 11:44:24.915269 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:44:24.915277 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:44:24.915285 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:44:24.915292 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:44:24.915300 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:44:24.915307 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:44:24.915315 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:44:24.915325 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:44:24.915332 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:44:24.915513 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:44:24.915526 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:44:24.915640 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:44:24.915753 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:44:24 UTC (1738151064)
Jan 29 11:44:24.915887 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:44:24.915900 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:44:24.915908 kernel: efifb: probing for efifb
Jan 29 11:44:24.915927 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 29 11:44:24.915935 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 29 11:44:24.915943 kernel: efifb: scrolling: redraw
Jan 29 11:44:24.915950 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 29 11:44:24.915958 kernel: Console: switching to colour frame buffer device 100x37
Jan 29 11:44:24.915982 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:44:24.915992 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:44:24.916002 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:44:24.916010 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:44:24.916017 kernel: Segment Routing with IPv6
Jan 29 11:44:24.916025 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:44:24.916033 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:44:24.916040 kernel: Key type dns_resolver registered
Jan 29 11:44:24.916048 kernel: IPI shorthand broadcast: enabled
Jan 29 11:44:24.916056 kernel: sched_clock: Marking stable (1072002850, 136199734)->(1274528280, -66325696)
Jan 29 11:44:24.916064 kernel: registered taskstats version 1
Jan 29 11:44:24.916071 kernel: Loading compiled-in X.509 certificates
Jan 29 11:44:24.916082 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 11:44:24.916089 kernel: Key type .fscrypt registered
Jan 29 11:44:24.916097 kernel: Key type fscrypt-provisioning registered
Jan 29 11:44:24.916105 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:44:24.916113 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:44:24.916120 kernel: ima: No architecture policies found
Jan 29 11:44:24.916128 kernel: clk: Disabling unused clocks
Jan 29 11:44:24.916135 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:44:24.916145 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:44:24.916153 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:44:24.916161 kernel: Run /init as init process
Jan 29 11:44:24.916168 kernel: with arguments:
Jan 29 11:44:24.916176 kernel: /init
Jan 29 11:44:24.916184 kernel: with environment:
Jan 29 11:44:24.916191 kernel: HOME=/
Jan 29 11:44:24.916199 kernel: TERM=linux
Jan 29 11:44:24.916206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:44:24.916221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:44:24.916231 systemd[1]: Detected virtualization kvm.
Jan 29 11:44:24.916240 systemd[1]: Detected architecture x86-64.
Jan 29 11:44:24.916248 systemd[1]: Running in initrd.
Jan 29 11:44:24.916266 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:44:24.916274 systemd[1]: Hostname set to <localhost>.
Jan 29 11:44:24.916283 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:44:24.916292 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:44:24.916300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:44:24.916308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:44:24.916317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:44:24.916326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:44:24.916337 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:44:24.916345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:44:24.916355 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:44:24.916364 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:44:24.916372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:44:24.916381 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:44:24.916389 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:44:24.916399 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:44:24.916407 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:44:24.916416 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:44:24.916424 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:44:24.916432 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:44:24.916440 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:44:24.916449 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:44:24.916457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:44:24.916465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:44:24.916476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:44:24.916484 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:44:24.916492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:44:24.916500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:44:24.916509 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:44:24.916517 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:44:24.916525 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:44:24.916533 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:44:24.916544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:44:24.916552 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:44:24.916561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:44:24.916569 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:44:24.916595 systemd-journald[193]: Collecting audit messages is disabled.
Jan 29 11:44:24.916616 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:44:24.916625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:44:24.916633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:44:24.916644 systemd-journald[193]: Journal started
Jan 29 11:44:24.916661 systemd-journald[193]: Runtime Journal (/run/log/journal/d8664bad6509487b827a9263ef52207e) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:44:24.911064 systemd-modules-load[194]: Inserted module 'overlay'
Jan 29 11:44:24.920229 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:44:24.918585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:44:24.920785 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:44:24.924515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:44:24.938860 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:44:24.940576 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:44:24.943993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:44:24.947907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:44:24.950834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:44:24.952277 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 29 11:44:24.953214 kernel: Bridge firewalling registered
Jan 29 11:44:24.953326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:44:24.958063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:44:24.962934 dracut-cmdline[220]: dracut-dracut-053
Jan 29 11:44:24.966078 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:44:24.969035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:44:24.980059 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:44:25.009571 systemd-resolved[240]: Positive Trust Anchors:
Jan 29 11:44:25.009598 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:44:25.009628 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:44:25.012496 systemd-resolved[240]: Defaulting to hostname 'linux'.
Jan 29 11:44:25.013672 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:44:25.019547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:44:25.072960 kernel: SCSI subsystem initialized
Jan 29 11:44:25.081943 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:44:25.091940 kernel: iscsi: registered transport (tcp)
Jan 29 11:44:25.112935 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:44:25.112962 kernel: QLogic iSCSI HBA Driver
Jan 29 11:44:25.158492 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:44:25.166164 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:44:25.191244 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:44:25.191302 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:44:25.192284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:44:25.232027 kernel: raid6: avx2x4 gen() 30698 MB/s
Jan 29 11:44:25.248939 kernel: raid6: avx2x2 gen() 30890 MB/s
Jan 29 11:44:25.266024 kernel: raid6: avx2x1 gen() 25765 MB/s
Jan 29 11:44:25.266045 kernel: raid6: using algorithm avx2x2 gen() 30890 MB/s
Jan 29 11:44:25.284141 kernel: raid6: .... xor() 18804 MB/s, rmw enabled
Jan 29 11:44:25.284232 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:44:25.307024 kernel: xor: automatically using best checksumming function avx
Jan 29 11:44:25.468953 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:44:25.481734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:44:25.490156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:44:25.503053 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 29 11:44:25.507872 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:44:25.527081 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:44:25.540716 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jan 29 11:44:25.575441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:44:25.589126 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:44:25.652134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:44:25.664327 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:44:25.677274 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:44:25.680093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:44:25.681453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:44:25.685504 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:44:25.689933 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:44:25.723046 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:44:25.723063 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:44:25.723262 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:44:25.723274 kernel: GPT:9289727 != 19775487
Jan 29 11:44:25.723297 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:44:25.723315 kernel: GPT:9289727 != 19775487
Jan 29 11:44:25.723326 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:44:25.723337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:44:25.723347 kernel: libata version 3.00 loaded.
Jan 29 11:44:25.723357 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:44:25.723367 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:44:25.708100 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:44:25.712676 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:44:25.712831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:44:25.714305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:44:25.715502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:44:25.731387 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:44:25.765318 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:44:25.765339 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:44:25.765531 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:44:25.765744 kernel: scsi host0: ahci
Jan 29 11:44:25.765900 kernel: scsi host1: ahci
Jan 29 11:44:25.766076 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473)
Jan 29 11:44:25.766091 kernel: scsi host2: ahci
Jan 29 11:44:25.766273 kernel: scsi host3: ahci
Jan 29 11:44:25.766419 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Jan 29 11:44:25.766431 kernel: scsi host4: ahci
Jan 29 11:44:25.766577 kernel: scsi host5: ahci
Jan 29 11:44:25.766756 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 29 11:44:25.766767 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 29 11:44:25.766778 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 29 11:44:25.766788 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 29 11:44:25.766798 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 29 11:44:25.766808 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 29 11:44:25.715706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:44:25.718415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:44:25.734175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:44:25.736565 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:44:25.740727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:44:25.740867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:44:25.768753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:44:25.775939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:44:25.786395 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:44:25.787687 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:44:25.795434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:44:25.805048 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:44:25.806844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:44:25.812414 disk-uuid[564]: Primary Header is updated.
Jan 29 11:44:25.812414 disk-uuid[564]: Secondary Entries is updated.
Jan 29 11:44:25.812414 disk-uuid[564]: Secondary Header is updated.
Jan 29 11:44:25.815941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:44:25.819933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:44:25.822690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:44:25.834175 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:44:25.850388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:44:26.074949 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:44:26.075025 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:44:26.075941 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:44:26.076939 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:44:26.077942 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:44:26.077967 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:44:26.079346 kernel: ata3.00: applying bridge limits
Jan 29 11:44:26.079358 kernel: ata3.00: configured for UDMA/100
Jan 29 11:44:26.079940 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:44:26.083943 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:44:26.118515 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:44:26.130567 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:44:26.130590 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:44:26.821653 disk-uuid[566]: The operation has completed successfully.
Jan 29 11:44:26.823248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:44:26.847728 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:44:26.847854 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:44:26.876163 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:44:26.879477 sh[595]: Success
Jan 29 11:44:26.892946 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:44:26.927498 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:44:26.937473 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:44:26.942428 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:44:26.951623 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 11:44:26.951658 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:44:26.951669 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:44:26.953382 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:44:26.953399 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:44:26.958399 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:44:26.959117 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:44:26.973147 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:44:26.975729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:44:26.983461 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:44:26.983510 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:44:26.983521 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:44:26.986948 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:44:26.996041 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:44:26.997940 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:44:27.006933 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:44:27.015187 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:44:27.067762 ignition[690]: Ignition 2.19.0
Jan 29 11:44:27.067774 ignition[690]: Stage: fetch-offline
Jan 29 11:44:27.067809 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:44:27.067819 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:44:27.067927 ignition[690]: parsed url from cmdline: ""
Jan 29 11:44:27.067931 ignition[690]: no config URL provided
Jan 29 11:44:27.067936 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:44:27.067946 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:44:27.067974 ignition[690]: op(1): [started] loading QEMU firmware config module
Jan 29 11:44:27.067979 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:44:27.076138 ignition[690]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:44:27.095180 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:44:27.108131 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:44:27.117697 ignition[690]: parsing config with SHA512: 2e71e2edf09e3297fe988475c0ec72992f00244724ba7a5dc54fc7b81b079fc3cc800ee86ded500b47cc52e9a2c524fe05ff40901975240437c7338fea6d77d5
Jan 29 11:44:27.121550 unknown[690]: fetched base config from "system"
Jan 29 11:44:27.121759 unknown[690]: fetched user config from "qemu"
Jan 29 11:44:27.122118 ignition[690]: fetch-offline: fetch-offline passed
Jan 29 11:44:27.122182 ignition[690]: Ignition finished successfully
Jan 29 11:44:27.127110 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:44:27.128837 systemd-networkd[785]: lo: Link UP
Jan 29 11:44:27.128841 systemd-networkd[785]: lo: Gained carrier
Jan 29 11:44:27.130591 systemd-networkd[785]: Enumeration completed
Jan 29 11:44:27.131010 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:44:27.131014 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:44:27.131998 systemd-networkd[785]: eth0: Link UP
Jan 29 11:44:27.132002 systemd-networkd[785]: eth0: Gained carrier
Jan 29 11:44:27.132009 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:44:27.132111 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:44:27.137957 systemd[1]: Reached target network.target - Network.
Jan 29 11:44:27.141818 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:44:27.164969 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:44:27.166510 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:44:27.178121 ignition[788]: Ignition 2.19.0
Jan 29 11:44:27.178133 ignition[788]: Stage: kargs
Jan 29 11:44:27.178316 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:44:27.178327 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:44:27.179109 ignition[788]: kargs: kargs passed
Jan 29 11:44:27.179155 ignition[788]: Ignition finished successfully
Jan 29 11:44:27.186688 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:44:27.198241 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:44:27.212264 ignition[797]: Ignition 2.19.0
Jan 29 11:44:27.212280 ignition[797]: Stage: disks
Jan 29 11:44:27.212467 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:44:27.212479 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:44:27.216265 ignition[797]: disks: disks passed
Jan 29 11:44:27.216909 ignition[797]: Ignition finished successfully
Jan 29 11:44:27.220006 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:44:27.221327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:44:27.223128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:44:27.224375 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:44:27.226458 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:44:27.227511 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:44:27.237056 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:44:27.247714 systemd-resolved[240]: Detected conflict on linux IN A 10.0.0.12
Jan 29 11:44:27.247730 systemd-resolved[240]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jan 29 11:44:27.250483 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:44:27.256950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:44:27.271028 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:44:27.357949 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 11:44:27.358910 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:44:27.360511 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:44:27.370003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:44:27.371778 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:44:27.372866 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:44:27.372927 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:44:27.381698 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
Jan 29 11:44:27.372958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:44:27.388042 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:44:27.388070 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:44:27.388081 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:44:27.388092 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:44:27.380105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:44:27.382363 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:44:27.390113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:44:27.421459 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:44:27.425656 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:44:27.429832 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:44:27.433781 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:44:27.524574 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:44:27.537057 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:44:27.538165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:44:27.547965 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:44:27.566286 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:44:27.633481 ignition[932]: INFO : Ignition 2.19.0
Jan 29 11:44:27.633481 ignition[932]: INFO : Stage: mount
Jan 29 11:44:27.635178 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:44:27.635178 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:44:27.635178 ignition[932]: INFO : mount: mount passed
Jan 29 11:44:27.635178 ignition[932]: INFO : Ignition finished successfully
Jan 29 11:44:27.639119 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:44:27.648009 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:44:27.951689 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:44:27.964139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:44:27.971772 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943)
Jan 29 11:44:27.971806 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:44:27.971818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:44:27.973280 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:44:27.975938 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:44:27.978419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:44:28.010078 ignition[960]: INFO : Ignition 2.19.0
Jan 29 11:44:28.010078 ignition[960]: INFO : Stage: files
Jan 29 11:44:28.021066 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:44:28.021066 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:44:28.021066 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:44:28.024317 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:44:28.024317 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:44:28.029381 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:44:28.030879 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:44:28.030879 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:44:28.030097 unknown[960]: wrote ssh authorized keys file for user: core
Jan 29 11:44:28.035326 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:44:28.035326 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 11:44:28.072145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:44:28.221384 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:44:28.221384 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:44:28.225303 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:44:28.225303 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:44:28.228842 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:44:28.231562 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:44:28.233460 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:44:28.235223 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:44:28.237052 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:44:28.239089 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:44:28.240955 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:44:28.242751 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:44:28.245395 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:44:28.247832 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:44:28.250096 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 11:44:28.660285 systemd-networkd[785]: eth0: Gained IPv6LL
Jan 29 11:44:28.739982 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:44:29.232488 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:44:29.232488 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:44:29.236308 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 11:44:29.238347 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:44:29.345370 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:44:29.351813 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:44:29.353420 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:44:29.353420 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:44:29.353420 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:44:29.353420 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:44:29.353420 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:44:29.353420 ignition[960]: INFO : files: files passed
Jan 29 11:44:29.353420 ignition[960]: INFO : Ignition finished successfully
Jan 29 11:44:29.365231 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:44:29.382070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:44:29.384007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:44:29.391786 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:44:29.391910 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:44:29.397066 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:44:29.401747 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:44:29.403675 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:44:29.403675 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:44:29.408521 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:44:29.408722 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:44:29.419100 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:44:29.442101 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:44:29.442260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:44:29.445432 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:44:29.446464 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:44:29.448801 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:44:29.458093 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:44:29.473628 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:44:29.475033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:44:29.488850 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:44:29.489013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:44:29.493163 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:44:29.494603 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:44:29.494720 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:44:29.498963 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:44:29.500296 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:44:29.502449 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:44:29.504353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:44:29.506678 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:44:29.510439 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:44:29.512832 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:44:29.515370 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:44:29.517955 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:44:29.520303 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:44:29.521407 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:44:29.521547 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:44:29.526424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:44:29.526600 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:44:29.528881 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:44:29.529015 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:44:29.531513 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:44:29.531639 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:44:29.537574 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:44:29.537728 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:44:29.538965 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:44:29.541221 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:44:29.546969 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:44:29.547110 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:44:29.551047 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:44:29.552064 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:44:29.552169 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:44:29.554064 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:44:29.554158 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:44:29.556038 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:44:29.556143 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:44:29.556602 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:44:29.556695 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:44:29.566043 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:44:29.568123 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:44:29.571868 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:44:29.573292 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:44:29.576122 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:44:29.577367 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:44:29.583825 ignition[1014]: INFO : Ignition 2.19.0
Jan 29 11:44:29.583825 ignition[1014]: INFO : Stage: umount
Jan 29 11:44:29.586129 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:44:29.586129 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:44:29.586129 ignition[1014]: INFO : umount: umount passed
Jan 29 11:44:29.586129 ignition[1014]: INFO : Ignition finished successfully
Jan 29 11:44:29.586332 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:44:29.586472 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:44:29.589116 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:44:29.589236 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:44:29.592563 systemd[1]: Stopped target network.target - Network.
Jan 29 11:44:29.594418 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:44:29.594478 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:44:29.595667 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:44:29.595714 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:44:29.597995 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:44:29.598041 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:44:29.598127 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:44:29.598178 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:44:29.598667 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:44:29.599310 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:44:29.600616 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:44:29.604958 systemd-networkd[785]: eth0: DHCPv6 lease lost
Jan 29 11:44:29.606872 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:44:29.607063 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:44:29.609896 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:44:29.610101 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:44:29.613171 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:44:29.613227 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:44:29.622018 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:44:29.623139 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:44:29.623202 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:44:29.625944 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:44:29.626002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:44:29.628369 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:44:29.628420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:44:29.629719 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:44:29.629766 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:44:29.632289 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:44:29.664647 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:44:29.664848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:44:29.667744 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:44:29.667851 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:44:29.670973 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:44:29.671042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:44:29.671848 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:44:29.671889 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:44:29.674325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:44:29.674374 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:44:29.680345 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:44:29.680394 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:44:29.683569 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:44:29.683617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:44:29.699035 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:44:29.699093 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:44:29.699142 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:44:29.701591 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:44:29.701642 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:44:29.704113 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:44:29.704168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:44:29.706858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:44:29.706905 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:44:29.726765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:44:29.726879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:44:29.742341 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:44:29.742461 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:44:29.744801 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:44:29.747027 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:44:29.747080 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:44:29.761210 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:44:29.769109 systemd[1]: Switching root.
Jan 29 11:44:29.803238 systemd-journald[193]: Journal stopped
Jan 29 11:44:30.892657 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:44:30.892744 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:44:30.892763 kernel: SELinux: policy capability open_perms=1
Jan 29 11:44:30.892778 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:44:30.892794 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:44:30.892811 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:44:30.892834 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:44:30.892850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:44:30.892865 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:44:30.892881 kernel: audit: type=1403 audit(1738151070.120:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:44:30.892909 systemd[1]: Successfully loaded SELinux policy in 43.203ms.
Jan 29 11:44:30.892956 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.974ms.
Jan 29 11:44:30.892975 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:44:30.892991 systemd[1]: Detected virtualization kvm.
Jan 29 11:44:30.893008 systemd[1]: Detected architecture x86-64.
Jan 29 11:44:30.893023 systemd[1]: Detected first boot.
Jan 29 11:44:30.893038 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:44:30.893054 zram_generator::config[1059]: No configuration found.
Jan 29 11:44:30.893075 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:44:30.893091 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:44:30.893107 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:44:30.893131 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:44:30.893148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:44:30.893164 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:44:30.893180 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:44:30.893197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:44:30.893213 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:44:30.893233 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:44:30.893249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:44:30.893264 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:44:30.893281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:44:30.893297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:44:30.893314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:44:30.893330 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:44:30.893346 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:44:30.893365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:44:30.893381 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:44:30.893397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:44:30.893414 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:44:30.893430 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:44:30.893466 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:44:30.893482 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:44:30.893497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:44:30.893521 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:44:30.893536 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:44:30.893551 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:44:30.893566 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:44:30.893581 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:44:30.893597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:44:30.893612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:44:30.893628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:44:30.893643 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:44:30.893658 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:44:30.893676 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:44:30.893691 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:44:30.893707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:30.893722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:44:30.893737 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:44:30.893753 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:44:30.893769 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:44:30.893785 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:44:30.893816 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:44:30.893832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:44:30.893848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:44:30.893862 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:44:30.893877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:44:30.893892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:44:30.893908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:44:30.893936 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:44:30.893955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:44:30.893971 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:44:30.893987 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:44:30.894003 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:44:30.894018 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:44:30.894034 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:44:30.894049 kernel: loop: module loaded
Jan 29 11:44:30.894064 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:44:30.894079 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:44:30.894098 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:44:30.894113 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:44:30.894137 kernel: fuse: init (API version 7.39)
Jan 29 11:44:30.894152 kernel: ACPI: bus type drm_connector registered
Jan 29 11:44:30.894176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:44:30.894192 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:44:30.894207 systemd[1]: Stopped verity-setup.service.
Jan 29 11:44:30.894224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:30.894261 systemd-journald[1143]: Collecting audit messages is disabled.
Jan 29 11:44:30.894292 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:44:30.894308 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:44:30.894323 systemd-journald[1143]: Journal started
Jan 29 11:44:30.894354 systemd-journald[1143]: Runtime Journal (/run/log/journal/d8664bad6509487b827a9263ef52207e) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:44:30.630867 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:44:30.650818 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:44:30.651277 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:44:30.897964 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:44:30.898623 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:44:30.899768 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:44:30.900968 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:44:30.902175 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:44:30.903388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:44:30.904869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:44:30.906430 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:44:30.906601 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:44:30.908090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:44:30.908263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:44:30.909693 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:44:30.909859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:44:30.911203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:44:30.911363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:44:30.912880 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:44:30.913064 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:44:30.914554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:44:30.914768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:44:30.916267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:44:30.917723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:44:30.919314 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:44:30.932662 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:44:30.941020 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:44:30.943410 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:44:30.944749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:44:30.944785 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:44:30.947406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:44:30.949973 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:44:30.954020 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:44:30.955689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:44:30.957948 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:44:30.961019 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:44:30.962538 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:44:30.964205 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:44:30.966003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:44:30.969740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:44:30.972561 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:44:30.987059 systemd-journald[1143]: Time spent on flushing to /var/log/journal/d8664bad6509487b827a9263ef52207e is 15.639ms for 996 entries.
Jan 29 11:44:30.987059 systemd-journald[1143]: System Journal (/var/log/journal/d8664bad6509487b827a9263ef52207e) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:44:31.090423 systemd-journald[1143]: Received client request to flush runtime journal.
Jan 29 11:44:31.090459 kernel: loop0: detected capacity change from 0 to 142488
Jan 29 11:44:30.978050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:44:30.981725 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:44:30.983406 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:44:30.985322 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:44:31.004033 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:44:31.007402 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:44:31.089239 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:44:31.090844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:44:31.094350 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:44:31.094263 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:44:31.096129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:44:31.112147 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:44:31.115327 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 29 11:44:31.115346 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 29 11:44:31.122908 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:44:31.123617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:44:31.126125 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:44:31.129937 kernel: loop1: detected capacity change from 0 to 140768
Jan 29 11:44:31.139230 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:44:31.140814 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:44:31.172244 kernel: loop2: detected capacity change from 0 to 205544
Jan 29 11:44:31.174009 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:44:31.184161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:44:31.204995 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 29 11:44:31.205392 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 29 11:44:31.211705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:44:31.215943 kernel: loop3: detected capacity change from 0 to 142488
Jan 29 11:44:31.251960 kernel: loop4: detected capacity change from 0 to 140768
Jan 29 11:44:31.263949 kernel: loop5: detected capacity change from 0 to 205544
Jan 29 11:44:31.268892 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:44:31.269478 (sd-merge)[1200]: Merged extensions into '/usr'.
Jan 29 11:44:31.274307 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:44:31.274323 systemd[1]: Reloading...
Jan 29 11:44:31.339950 zram_generator::config[1226]: No configuration found.
Jan 29 11:44:31.531141 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:44:31.534736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:44:31.583900 systemd[1]: Reloading finished in 309 ms.
Jan 29 11:44:31.616530 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:44:31.618180 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:44:31.632183 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:44:31.634420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:44:31.640169 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:44:31.640183 systemd[1]: Reloading...
Jan 29 11:44:31.669761 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:44:31.670147 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:44:31.672433 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:44:31.672737 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 29 11:44:31.672825 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 29 11:44:31.682833 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:44:31.682863 systemd-tmpfiles[1264]: Skipping /boot
Jan 29 11:44:31.714849 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:44:31.714867 systemd-tmpfiles[1264]: Skipping /boot
Jan 29 11:44:31.726953 zram_generator::config[1290]: No configuration found.
Jan 29 11:44:31.833006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:44:31.882270 systemd[1]: Reloading finished in 241 ms.
Jan 29 11:44:31.899206 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:44:31.911370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:44:31.920309 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 11:44:31.922938 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:44:31.925641 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:44:31.929820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:44:31.933065 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:44:31.938890 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:44:31.942865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:31.943494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:44:31.944873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:44:31.949208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:44:31.952272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:44:31.953449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:44:31.953552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:31.954565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:44:31.954744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:44:31.959838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:44:31.960323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:44:31.967142 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:44:31.969206 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:44:31.969419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:44:31.971639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:44:31.973011 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Jan 29 11:44:31.980724 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:31.981173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:44:31.981461 augenrules[1357]: No rules
Jan 29 11:44:31.990196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:44:31.992684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:44:31.994066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:44:31.995563 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:44:31.999452 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:44:32.000608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:32.001453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:44:32.003540 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 11:44:32.005551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:44:32.006422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:44:32.008669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:44:32.009006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:44:32.011888 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:44:32.024075 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:44:32.032195 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:44:32.039049 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 11:44:32.039191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:32.039332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:44:32.046103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:44:32.057559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:44:32.059808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:44:32.062158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:44:32.063372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:44:32.065315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:44:32.073080 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:44:32.074293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:44:32.074317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:44:32.074622 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:44:32.076283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:44:32.076778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:44:32.078435 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:44:32.078708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:44:32.080219 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:44:32.080493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:44:32.093007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373)
Jan 29 11:44:32.091784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:44:32.094304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:44:32.094496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:44:32.096816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:44:32.125595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:44:32.135360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:44:32.165494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:44:32.187562 systemd-networkd[1401]: lo: Link UP
Jan 29 11:44:32.187585 systemd-networkd[1401]: lo: Gained carrier
Jan 29 11:44:32.189570 systemd-networkd[1401]: Enumeration completed
Jan 29 11:44:32.189867 systemd-resolved[1333]: Positive Trust Anchors:
Jan 29 11:44:32.190037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:44:32.190337 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:44:32.190342 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:44:32.190418 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:44:32.190453 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:44:32.191477 systemd-networkd[1401]: eth0: Link UP
Jan 29 11:44:32.191488 systemd-networkd[1401]: eth0: Gained carrier
Jan 29 11:44:32.191503 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:44:32.198813 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Jan 29 11:44:32.201945 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 11:44:32.203143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:44:32.204541 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:44:32.206017 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:44:32.206934 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:44:32.210309 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:44:32.211417 systemd[1]: Reached target network.target - Network.
Jan 29 11:44:32.213007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:44:32.213610 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
Jan 29 11:44:32.214442 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:44:32.217859 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:44:32.217938 systemd-timesyncd[1402]: Initial clock synchronization to Wed 2025-01-29 11:44:32.305623 UTC.
Jan 29 11:44:32.228930 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 29 11:44:32.236134 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 11:44:32.236308 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 11:44:32.236483 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 11:44:32.245008 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 11:44:32.258938 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:44:32.265197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:44:32.324807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:44:32.364607 kernel: kvm_amd: TSC scaling supported
Jan 29 11:44:32.364668 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 11:44:32.364682 kernel: kvm_amd: Nested Paging enabled
Jan 29 11:44:32.366398 kernel: kvm_amd: LBR virtualization supported
Jan 29 11:44:32.366425 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 11:44:32.366455 kernel: kvm_amd: Virtual GIF supported
Jan 29 11:44:32.385942 kernel: EDAC MC: Ver: 3.0.0
Jan 29 11:44:32.419397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:44:32.432096 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:44:32.447425 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:44:32.479596 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:44:32.481305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:44:32.482872 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:44:32.484203 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:44:32.492089 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:44:32.493607 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:44:32.495041 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:44:32.496364 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:44:32.497684 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:44:32.497716 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:44:32.498697 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:44:32.500327 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:44:32.503069 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:44:32.519816 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:44:32.522257 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:44:32.523829 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:44:32.525108 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:44:32.526132 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:44:32.527351 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:44:32.527380 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:44:32.528439 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:44:32.532954 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:44:32.530770 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:44:32.536072 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:44:32.540136 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:44:32.541772 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:44:32.545039 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:44:32.545168 jq[1440]: false
Jan 29 11:44:32.549057 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:44:32.555677 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:44:32.559070 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:44:32.560620 dbus-daemon[1439]: [system] SELinux support is enabled
Jan 29 11:44:32.566075 extend-filesystems[1441]: Found loop3
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found loop4
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found loop5
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found sr0
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda1
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda2
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda3
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found usr
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda4
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda6
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda7
Jan 29 11:44:32.567505 extend-filesystems[1441]: Found vda9
Jan 29 11:44:32.567505 extend-filesystems[1441]: Checking size of /dev/vda9
Jan 29 11:44:32.570373 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:44:32.572600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:44:32.573234 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:44:32.576427 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:44:32.584018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:44:32.587117 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:44:32.590786 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:44:32.592253 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 29 11:44:32.602533 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:44:32.602887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:44:32.603358 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:44:32.603618 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:44:32.610447 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:44:32.615886 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:44:32.620147 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370) Jan 29 11:44:32.610748 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:44:32.626941 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:44:32.632004 jq[1458]: true Jan 29 11:44:32.635742 update_engine[1455]: I20250129 11:44:32.635371 1455 main.cc:92] Flatcar Update Engine starting Jan 29 11:44:32.637391 update_engine[1455]: I20250129 11:44:32.637012 1455 update_check_scheduler.cc:74] Next update check in 6m10s Jan 29 11:44:32.642614 jq[1472]: true Jan 29 11:44:32.653748 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:44:32.655363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:44:32.655392 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:44:32.656757 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:44:32.656776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:44:32.665141 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:44:32.677218 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:44:32.677595 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:44:32.677904 systemd-logind[1452]: New seat seat0. Jan 29 11:44:32.678779 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:44:32.687172 tar[1464]: linux-amd64/helm Jan 29 11:44:32.715804 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:44:32.722956 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:44:32.742505 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:44:32.750151 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:44:32.757886 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:44:32.758137 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 29 11:44:32.764197 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:44:32.824464 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:44:32.824571 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:44:32.836209 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:44:32.838560 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:44:32.840454 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:44:32.941948 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:44:33.139533 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:44:33.139533 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:44:33.139533 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:44:33.145038 containerd[1466]: time="2025-01-29T11:44:33.139250277Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:44:33.142443 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:44:33.145419 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 29 11:44:33.142851 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:44:33.147030 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:44:33.149129 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:44:33.151643 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:44:33.163779 containerd[1466]: time="2025-01-29T11:44:33.163702587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166026 containerd[1466]: time="2025-01-29T11:44:33.165987657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166026 containerd[1466]: time="2025-01-29T11:44:33.166017932Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:44:33.166101 containerd[1466]: time="2025-01-29T11:44:33.166034878Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:44:33.166282 containerd[1466]: time="2025-01-29T11:44:33.166251978Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:44:33.166282 containerd[1466]: time="2025-01-29T11:44:33.166277665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166369 containerd[1466]: time="2025-01-29T11:44:33.166351535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166391 containerd[1466]: time="2025-01-29T11:44:33.166371377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166612 containerd[1466]: time="2025-01-29T11:44:33.166586796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166643 containerd[1466]: time="2025-01-29T11:44:33.166611652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166643 containerd[1466]: time="2025-01-29T11:44:33.166626086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166643 containerd[1466]: time="2025-01-29T11:44:33.166636691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:44:33.166747 containerd[1466]: time="2025-01-29T11:44:33.166729927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:44:33.167020 containerd[1466]: time="2025-01-29T11:44:33.167001278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:44:33.167153 containerd[1466]: time="2025-01-29T11:44:33.167134391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:44:33.167230 containerd[1466]: time="2025-01-29T11:44:33.167151123Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:44:33.167263 containerd[1466]: time="2025-01-29T11:44:33.167245645Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:44:33.167320 containerd[1466]: time="2025-01-29T11:44:33.167304778Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:44:33.193555 containerd[1466]: time="2025-01-29T11:44:33.193455142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:44:33.193555 containerd[1466]: time="2025-01-29T11:44:33.193516969Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:44:33.193555 containerd[1466]: time="2025-01-29T11:44:33.193551397Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:44:33.193641 containerd[1466]: time="2025-01-29T11:44:33.193568089Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:44:33.193641 containerd[1466]: time="2025-01-29T11:44:33.193591962Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:44:33.193776 containerd[1466]: time="2025-01-29T11:44:33.193741534Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:44:33.194019 containerd[1466]: time="2025-01-29T11:44:33.193999505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 11:44:33.194144 containerd[1466]: time="2025-01-29T11:44:33.194109018Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:44:33.194144 containerd[1466]: time="2025-01-29T11:44:33.194128657Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:44:33.194144 containerd[1466]: time="2025-01-29T11:44:33.194140711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:44:33.194220 containerd[1466]: time="2025-01-29T11:44:33.194170257Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194220 containerd[1466]: time="2025-01-29T11:44:33.194186665Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194220 containerd[1466]: time="2025-01-29T11:44:33.194197807Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194220 containerd[1466]: time="2025-01-29T11:44:33.194210144Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194290 containerd[1466]: time="2025-01-29T11:44:33.194223534Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194290 containerd[1466]: time="2025-01-29T11:44:33.194236944Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194290 containerd[1466]: time="2025-01-29T11:44:33.194248512Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194290 containerd[1466]: time="2025-01-29T11:44:33.194259076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:44:33.194290 containerd[1466]: time="2025-01-29T11:44:33.194279040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194292450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194304990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194323546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194336663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194354905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194366917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194378 containerd[1466]: time="2025-01-29T11:44:33.194379335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194395278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194410725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194423852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194436351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194448444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194463091Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194481444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194493679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194506 containerd[1466]: time="2025-01-29T11:44:33.194504163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194564844Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194580595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194591058Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194604540Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194614496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194626560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194641398Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:44:33.194664 containerd[1466]: time="2025-01-29T11:44:33.194650970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:44:33.194967 containerd[1466]: time="2025-01-29T11:44:33.194895591Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:44:33.195130 containerd[1466]: time="2025-01-29T11:44:33.194968346Z" level=info msg="Connect containerd service" Jan 29 11:44:33.195130 containerd[1466]: time="2025-01-29T11:44:33.195007605Z" level=info msg="using legacy CRI server" Jan 29 11:44:33.195130 containerd[1466]: time="2025-01-29T11:44:33.195014109Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:44:33.195130 containerd[1466]: time="2025-01-29T11:44:33.195102816Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:44:33.196190 containerd[1466]: time="2025-01-29T11:44:33.196063504Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:44:33.197145 
containerd[1466]: time="2025-01-29T11:44:33.196305379Z" level=info msg="Start subscribing containerd event" Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196391454Z" level=info msg="Start recovering state" Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196498840Z" level=info msg="Start event monitor" Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196513679Z" level=info msg="Start snapshots syncer" Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196525226Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196539355Z" level=info msg="Start streaming server" Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196653882Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.196731823Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:44:33.197145 containerd[1466]: time="2025-01-29T11:44:33.197075423Z" level=info msg="containerd successfully booted in 0.121244s" Jan 29 11:44:33.196918 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:44:33.321181 tar[1464]: linux-amd64/LICENSE Jan 29 11:44:33.321301 tar[1464]: linux-amd64/README.md Jan 29 11:44:33.341018 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:44:34.036811 systemd-networkd[1401]: eth0: Gained IPv6LL Jan 29 11:44:34.040648 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:44:34.042787 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:44:34.051413 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:44:34.054701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:44:34.057615 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:44:34.079207 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:44:34.079518 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:44:34.081790 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:44:34.084738 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:44:35.010989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:44:35.013032 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:44:35.015771 systemd[1]: Startup finished in 1.203s (kernel) + 5.413s (initrd) + 4.936s (userspace) = 11.553s. Jan 29 11:44:35.017197 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:44:35.689533 kubelet[1552]: E0129 11:44:35.689385 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:44:35.694453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:44:35.694684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:44:35.695036 systemd[1]: kubelet.service: Consumed 1.510s CPU time. 
Jan 29 11:44:38.984157 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:44:38.985387 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:52652.service - OpenSSH per-connection server daemon (10.0.0.1:52652). Jan 29 11:44:39.032327 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 52652 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:39.034405 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:39.042146 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:44:39.054144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:44:39.055929 systemd-logind[1452]: New session 1 of user core. Jan 29 11:44:39.065796 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:44:39.075231 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:44:39.078049 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:44:39.178450 systemd[1569]: Queued start job for default target default.target. Jan 29 11:44:39.188292 systemd[1569]: Created slice app.slice - User Application Slice. Jan 29 11:44:39.188323 systemd[1569]: Reached target paths.target - Paths. Jan 29 11:44:39.188342 systemd[1569]: Reached target timers.target - Timers. Jan 29 11:44:39.190034 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:44:39.203152 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:44:39.203318 systemd[1569]: Reached target sockets.target - Sockets. Jan 29 11:44:39.203342 systemd[1569]: Reached target basic.target - Basic System. Jan 29 11:44:39.203388 systemd[1569]: Reached target default.target - Main User Target. Jan 29 11:44:39.203431 systemd[1569]: Startup finished in 118ms. Jan 29 11:44:39.203886 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:44:39.205934 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:44:39.273472 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:52658.service - OpenSSH per-connection server daemon (10.0.0.1:52658). Jan 29 11:44:39.311681 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 52658 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:39.313261 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:39.317480 systemd-logind[1452]: New session 2 of user core. Jan 29 11:44:39.331083 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:44:39.386965 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 29 11:44:39.395364 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:52658.service: Deactivated successfully. Jan 29 11:44:39.397462 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:44:39.399251 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:44:39.414233 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:52670.service - OpenSSH per-connection server daemon (10.0.0.1:52670). Jan 29 11:44:39.415494 systemd-logind[1452]: Removed session 2. 
Jan 29 11:44:39.448723 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 52670 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:39.450621 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:39.454739 systemd-logind[1452]: New session 3 of user core. Jan 29 11:44:39.464063 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:44:39.514551 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 29 11:44:39.521830 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:52670.service: Deactivated successfully. Jan 29 11:44:39.523419 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:44:39.525196 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:44:39.534296 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:52678.service - OpenSSH per-connection server daemon (10.0.0.1:52678). Jan 29 11:44:39.535208 systemd-logind[1452]: Removed session 3. Jan 29 11:44:39.567148 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 52678 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:39.568826 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:39.572986 systemd-logind[1452]: New session 4 of user core. Jan 29 11:44:39.587107 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:44:39.641294 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 29 11:44:39.652250 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:52678.service: Deactivated successfully. Jan 29 11:44:39.654214 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:44:39.656334 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:44:39.658107 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:52682.service - OpenSSH per-connection server daemon (10.0.0.1:52682). Jan 29 11:44:39.658993 systemd-logind[1452]: Removed session 4. Jan 29 11:44:39.701555 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 52682 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:39.703356 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:39.707297 systemd-logind[1452]: New session 5 of user core. Jan 29 11:44:39.717049 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:44:39.775470 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:44:39.775837 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:44:39.806942 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 29 11:44:39.808833 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 29 11:44:39.834061 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:52682.service: Deactivated successfully. Jan 29 11:44:39.835807 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:44:39.837526 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:44:39.838899 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:52696.service - OpenSSH per-connection server daemon (10.0.0.1:52696). Jan 29 11:44:39.839678 systemd-logind[1452]: Removed session 5. 
Jan 29 11:44:39.874664 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 52696 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:39.876247 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:39.880408 systemd-logind[1452]: New session 6 of user core. Jan 29 11:44:39.890047 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:44:39.943375 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:44:39.943789 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:44:39.947613 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 29 11:44:39.955213 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:44:39.955636 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:44:39.974196 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:44:39.976010 auditctl[1616]: No rules Jan 29 11:44:39.976450 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:44:39.976708 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:44:39.979931 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:44:40.011599 augenrules[1634]: No rules Jan 29 11:44:40.013787 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:44:40.015294 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 29 11:44:40.017253 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 29 11:44:40.024721 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:52696.service: Deactivated successfully. Jan 29 11:44:40.026522 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:44:40.028138 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:44:40.038348 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:52712.service - OpenSSH per-connection server daemon (10.0.0.1:52712). Jan 29 11:44:40.039419 systemd-logind[1452]: Removed session 6. Jan 29 11:44:40.070275 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 52712 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:44:40.071808 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:44:40.075724 systemd-logind[1452]: New session 7 of user core. Jan 29 11:44:40.085062 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:44:40.138074 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:44:40.138395 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:44:40.805189 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:44:40.805330 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:44:41.498984 dockerd[1663]: time="2025-01-29T11:44:41.498873656Z" level=info msg="Starting up" Jan 29 11:44:41.802214 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3409113362-merged.mount: Deactivated successfully. Jan 29 11:44:41.835696 dockerd[1663]: time="2025-01-29T11:44:41.835652834Z" level=info msg="Loading containers: start." 
Jan 29 11:44:41.987968 kernel: Initializing XFRM netlink socket Jan 29 11:44:42.064684 systemd-networkd[1401]: docker0: Link UP Jan 29 11:44:42.086756 dockerd[1663]: time="2025-01-29T11:44:42.086674437Z" level=info msg="Loading containers: done." Jan 29 11:44:42.110383 dockerd[1663]: time="2025-01-29T11:44:42.110315390Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:44:42.110606 dockerd[1663]: time="2025-01-29T11:44:42.110457755Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:44:42.110606 dockerd[1663]: time="2025-01-29T11:44:42.110600109Z" level=info msg="Daemon has completed initialization" Jan 29 11:44:42.151955 dockerd[1663]: time="2025-01-29T11:44:42.151850403Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:44:42.152156 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:44:42.799529 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3616209840-merged.mount: Deactivated successfully. Jan 29 11:44:42.918770 containerd[1466]: time="2025-01-29T11:44:42.918732577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:44:43.616177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679254775.mount: Deactivated successfully. Jan 29 11:44:44.518509 containerd[1466]: time="2025-01-29T11:44:44.518427810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:44.519150 containerd[1466]: time="2025-01-29T11:44:44.519075033Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:44:44.520577 containerd[1466]: time="2025-01-29T11:44:44.520525001Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:44.523846 containerd[1466]: time="2025-01-29T11:44:44.523792572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:44.525313 containerd[1466]: time="2025-01-29T11:44:44.525240239Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.60646649s" Jan 29 11:44:44.525313 containerd[1466]: time="2025-01-29T11:44:44.525305768Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:44:44.527034 containerd[1466]: time="2025-01-29T11:44:44.526996101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:44:45.877207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:44:45.913295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:44:46.136026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:44:46.141117 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:44:46.408148 containerd[1466]: time="2025-01-29T11:44:46.408002918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:46.409183 containerd[1466]: time="2025-01-29T11:44:46.409136510Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:44:46.410407 containerd[1466]: time="2025-01-29T11:44:46.410366598Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:46.413630 containerd[1466]: time="2025-01-29T11:44:46.413591945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:46.415105 containerd[1466]: time="2025-01-29T11:44:46.415052850Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.888020295s" Jan 29 11:44:46.415166 containerd[1466]: time="2025-01-29T11:44:46.415107187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:44:46.416060 containerd[1466]: time="2025-01-29T11:44:46.416018382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:44:46.422006 kubelet[1879]: E0129 11:44:46.421962 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:44:46.428802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:44:46.429098 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:44:47.796463 containerd[1466]: time="2025-01-29T11:44:47.796398161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:47.797327 containerd[1466]: time="2025-01-29T11:44:47.797292517Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:44:47.798701 containerd[1466]: time="2025-01-29T11:44:47.798650275Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:47.801272 containerd[1466]: time="2025-01-29T11:44:47.801249986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:47.802417 containerd[1466]: time="2025-01-29T11:44:47.802376790Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.386327252s" Jan 29 11:44:47.802450 containerd[1466]: time="2025-01-29T11:44:47.802415437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:44:47.802983 containerd[1466]: time="2025-01-29T11:44:47.802955381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:44:48.780502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335827195.mount: Deactivated successfully. 
Jan 29 11:44:49.300567 containerd[1466]: time="2025-01-29T11:44:49.300507913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:49.301446 containerd[1466]: time="2025-01-29T11:44:49.301407733Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:44:49.302869 containerd[1466]: time="2025-01-29T11:44:49.302822444Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:49.305866 containerd[1466]: time="2025-01-29T11:44:49.305831736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:49.306700 containerd[1466]: time="2025-01-29T11:44:49.306645344Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.503651701s" Jan 29 11:44:49.306738 containerd[1466]: time="2025-01-29T11:44:49.306702414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:44:49.307455 containerd[1466]: time="2025-01-29T11:44:49.307416676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:44:49.854558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318814250.mount: Deactivated successfully. 
Jan 29 11:44:51.131281 containerd[1466]: time="2025-01-29T11:44:51.131207121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:51.132476 containerd[1466]: time="2025-01-29T11:44:51.132432080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:44:51.134070 containerd[1466]: time="2025-01-29T11:44:51.134022266Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:51.138005 containerd[1466]: time="2025-01-29T11:44:51.137968512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:51.139114 containerd[1466]: time="2025-01-29T11:44:51.139065293Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.83161024s" Jan 29 11:44:51.139114 containerd[1466]: time="2025-01-29T11:44:51.139102250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:44:51.139717 containerd[1466]: time="2025-01-29T11:44:51.139684658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:44:51.885620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073801929.mount: Deactivated successfully. 
Jan 29 11:44:51.890815 containerd[1466]: time="2025-01-29T11:44:51.890761173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:51.891558 containerd[1466]: time="2025-01-29T11:44:51.891499278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:44:51.892714 containerd[1466]: time="2025-01-29T11:44:51.892685607Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:51.894726 containerd[1466]: time="2025-01-29T11:44:51.894690570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:51.895471 containerd[1466]: time="2025-01-29T11:44:51.895434973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 755.721122ms" Jan 29 11:44:51.895471 containerd[1466]: time="2025-01-29T11:44:51.895465370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:44:51.895954 containerd[1466]: time="2025-01-29T11:44:51.895934003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:44:52.431394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163103345.mount: Deactivated successfully. Jan 29 11:44:54.737255 containerd[1466]: time="2025-01-29T11:44:54.737176193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:54.738071 containerd[1466]: time="2025-01-29T11:44:54.738029349Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:44:54.739226 containerd[1466]: time="2025-01-29T11:44:54.739172921Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:54.742610 containerd[1466]: time="2025-01-29T11:44:54.742573543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:44:54.743772 containerd[1466]: time="2025-01-29T11:44:54.743726760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.847707487s" Jan 29 11:44:54.743831 containerd[1466]: time="2025-01-29T11:44:54.743773719Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:44:56.546215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 29 11:44:56.557161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:44:56.569584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:44:56.569677 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:44:56.569959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:44:56.583163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:44:56.610066 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit session-7.scope)... Jan 29 11:44:56.610088 systemd[1]: Reloading... Jan 29 11:44:56.702822 zram_generator::config[2074]: No configuration found. Jan 29 11:44:56.847552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:44:56.929980 systemd[1]: Reloading finished in 319 ms. Jan 29 11:44:56.981412 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:44:56.981650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:44:56.984288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:44:57.137470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:44:57.143111 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:44:57.224582 kubelet[2123]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:44:57.224582 kubelet[2123]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:44:57.224582 kubelet[2123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:44:57.225064 kubelet[2123]: I0129 11:44:57.224629 2123 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:44:57.545834 kubelet[2123]: I0129 11:44:57.545722 2123 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:44:57.545834 kubelet[2123]: I0129 11:44:57.545748 2123 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:44:57.545996 kubelet[2123]: I0129 11:44:57.545972 2123 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:44:57.565543 kubelet[2123]: E0129 11:44:57.565511 2123 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:57.567032 kubelet[2123]: I0129 11:44:57.566991 2123 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:44:57.574239 kubelet[2123]: E0129 11:44:57.574198 2123 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:44:57.574239 kubelet[2123]: I0129 11:44:57.574233 2123 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:44:57.580032 kubelet[2123]: I0129 11:44:57.580013 2123 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:44:57.580154 kubelet[2123]: I0129 11:44:57.580126 2123 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:44:57.580312 kubelet[2123]: I0129 11:44:57.580279 2123 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:44:57.580487 kubelet[2123]: I0129 11:44:57.580307 2123 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:44:57.580568 kubelet[2123]: I0129 11:44:57.580507 2123 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:44:57.580568 kubelet[2123]: I0129 11:44:57.580516 2123 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:44:57.580658 kubelet[2123]: I0129 11:44:57.580640 2123 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:44:57.582041 kubelet[2123]: I0129 11:44:57.582009 2123 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:44:57.582080 kubelet[2123]: I0129 11:44:57.582042 2123 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:44:57.582105 kubelet[2123]: I0129 11:44:57.582097 2123 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:44:57.582134 kubelet[2123]: I0129 11:44:57.582124 2123 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:44:57.587423 kubelet[2123]: W0129 11:44:57.587361 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:57.587423 kubelet[2123]: E0129 11:44:57.587418 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:57.587423 kubelet[2123]: W0129 11:44:57.587362 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:57.587568 kubelet[2123]: E0129 11:44:57.587441 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:57.633208 kubelet[2123]: I0129 11:44:57.633182 2123 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:44:57.634556 kubelet[2123]: I0129 11:44:57.634531 2123 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:44:57.635033 kubelet[2123]: W0129 11:44:57.635014 2123 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:44:57.635644 kubelet[2123]: I0129 11:44:57.635617 2123 server.go:1269] "Started kubelet" Jan 29 11:44:57.636770 kubelet[2123]: I0129 11:44:57.636007 2123 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:44:57.636770 kubelet[2123]: I0129 11:44:57.636066 2123 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:44:57.636770 kubelet[2123]: I0129 11:44:57.636416 2123 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:44:57.637233 kubelet[2123]: I0129 11:44:57.637206 2123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:44:57.639624 kubelet[2123]: I0129 11:44:57.637237 2123 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:44:57.639624 kubelet[2123]: I0129 11:44:57.637397 2123 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:44:57.639624 kubelet[2123]: I0129 11:44:57.638205 2123 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:44:57.639624 kubelet[2123]: I0129 11:44:57.638293 2123 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:44:57.639624 kubelet[2123]: I0129 11:44:57.638342 2123 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:44:57.640506 kubelet[2123]: E0129 11:44:57.639221 2123 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:44:57.640506 kubelet[2123]: E0129 11:44:57.640371 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Jan 29 11:44:57.640506 kubelet[2123]: W0129 11:44:57.640435 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:57.640506 kubelet[2123]: E0129 11:44:57.640477 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:57.640884 kubelet[2123]: I0129 11:44:57.640855 2123 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:44:57.645170 kubelet[2123]: E0129 11:44:57.641058 2123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2736d4866731 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:44:57.635596081 +0000 UTC m=+0.451392886,LastTimestamp:2025-01-29 11:44:57.635596081 +0000 UTC m=+0.451392886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:44:57.645542 kubelet[2123]: I0129 11:44:57.645507 2123 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:44:57.645542 kubelet[2123]: I0129 11:44:57.645539 2123 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:44:57.645889 kubelet[2123]: E0129 11:44:57.645867 2123 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:44:57.660791 kubelet[2123]: I0129 11:44:57.660743 2123 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:44:57.660981 kubelet[2123]: I0129 11:44:57.660948 2123 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:44:57.660981 kubelet[2123]: I0129 11:44:57.660962 2123 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:44:57.660981 kubelet[2123]: I0129 11:44:57.660981 2123 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:44:57.662458 kubelet[2123]: I0129 11:44:57.662429 2123 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:44:57.662545 kubelet[2123]: I0129 11:44:57.662482 2123 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:44:57.662545 kubelet[2123]: I0129 11:44:57.662505 2123 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:44:57.662545 kubelet[2123]: E0129 11:44:57.662539 2123 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:44:57.663739 kubelet[2123]: W0129 11:44:57.663699 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:57.663739 kubelet[2123]: E0129 11:44:57.663734 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:57.740927 kubelet[2123]: E0129 11:44:57.740872 2123 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:44:57.763209 kubelet[2123]: E0129 11:44:57.763182 2123 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:44:57.840967 kubelet[2123]: E0129 11:44:57.840814 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Jan 29 11:44:57.841985 kubelet[2123]: E0129 11:44:57.841961 2123 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:44:57.912973 kubelet[2123]: I0129 11:44:57.912945 2123 policy_none.go:49] "None policy: Start" Jan 29 11:44:57.913769 kubelet[2123]: I0129 11:44:57.913740 2123 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:44:57.913816 kubelet[2123]: I0129 11:44:57.913781 2123 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:44:57.921343 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:44:57.936825 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:44:57.939680 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:44:57.942388 kubelet[2123]: E0129 11:44:57.942357 2123 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:44:57.945793 kubelet[2123]: I0129 11:44:57.945763 2123 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:44:57.946076 kubelet[2123]: I0129 11:44:57.946050 2123 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:44:57.946115 kubelet[2123]: I0129 11:44:57.946071 2123 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:44:57.946473 kubelet[2123]: I0129 11:44:57.946350 2123 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:44:57.947366 kubelet[2123]: E0129 11:44:57.947341 2123 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:44:57.971196 systemd[1]: Created slice kubepods-burstable-podf25de693ce4f337f40180689e5591bf9.slice - libcontainer container kubepods-burstable-podf25de693ce4f337f40180689e5591bf9.slice. Jan 29 11:44:57.999129 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:44:58.009421 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:44:58.040180 kubelet[2123]: I0129 11:44:58.040141 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:44:58.040180 kubelet[2123]: I0129 11:44:58.040176 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:44:58.040268 kubelet[2123]: I0129 11:44:58.040195 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:44:58.040268 kubelet[2123]: I0129 11:44:58.040259 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f25de693ce4f337f40180689e5591bf9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f25de693ce4f337f40180689e5591bf9\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:44:58.040327 kubelet[2123]: I0129 11:44:58.040312 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f25de693ce4f337f40180689e5591bf9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f25de693ce4f337f40180689e5591bf9\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:44:58.040353 kubelet[2123]: I0129 11:44:58.040341 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:44:58.040378 kubelet[2123]: I0129 11:44:58.040356 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f25de693ce4f337f40180689e5591bf9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f25de693ce4f337f40180689e5591bf9\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:44:58.040426 kubelet[2123]: I0129 11:44:58.040399 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:44:58.040426 kubelet[2123]: I0129 11:44:58.040423 2123 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:44:58.047203 kubelet[2123]: I0129 11:44:58.047174 2123 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:44:58.047527 kubelet[2123]: E0129 11:44:58.047497 2123 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 29 11:44:58.242146 kubelet[2123]: E0129 11:44:58.242086 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Jan 29 11:44:58.249119 kubelet[2123]: I0129 11:44:58.249089 2123 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:44:58.249367 kubelet[2123]: E0129 11:44:58.249332 2123 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 29 11:44:58.297791 kubelet[2123]: E0129 11:44:58.297739 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:44:58.298627 containerd[1466]: time="2025-01-29T11:44:58.298571458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f25de693ce4f337f40180689e5591bf9,Namespace:kube-system,Attempt:0,}" Jan 29 11:44:58.307760 kubelet[2123]: E0129 11:44:58.307731 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:44:58.308231 containerd[1466]: time="2025-01-29T11:44:58.308194103Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:44:58.311546 kubelet[2123]: E0129 11:44:58.311504 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:44:58.312041 containerd[1466]: time="2025-01-29T11:44:58.312005602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:44:58.531669 kubelet[2123]: W0129 11:44:58.531487 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:58.531669 kubelet[2123]: E0129 11:44:58.531606 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:58.651262 kubelet[2123]: I0129 11:44:58.651236 2123 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:44:58.651668 kubelet[2123]: E0129 11:44:58.651618 2123 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 29 11:44:58.806956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385634654.mount: Deactivated successfully. 
Jan 29 11:44:58.970750 kubelet[2123]: W0129 11:44:58.970656 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:58.970750 kubelet[2123]: E0129 11:44:58.970730 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:59.030725 kubelet[2123]: W0129 11:44:59.030632 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:59.030772 kubelet[2123]: E0129 11:44:59.030732 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:59.042599 kubelet[2123]: E0129 11:44:59.042554 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Jan 29 11:44:59.131504 kubelet[2123]: W0129 11:44:59.131427 2123 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 29 11:44:59.131504 kubelet[2123]: E0129 11:44:59.131502 2123 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:59.234147 containerd[1466]: time="2025-01-29T11:44:59.234092471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:44:59.235231 containerd[1466]: time="2025-01-29T11:44:59.235164226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:44:59.236116 containerd[1466]: time="2025-01-29T11:44:59.236053345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:44:59.237063 containerd[1466]: time="2025-01-29T11:44:59.237030750Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:44:59.238039 containerd[1466]: time="2025-01-29T11:44:59.237995568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=0" Jan 29 11:44:59.238962 containerd[1466]: time="2025-01-29T11:44:59.238903229Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:44:59.239666 containerd[1466]: time="2025-01-29T11:44:59.239612349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:44:59.242282 containerd[1466]: time="2025-01-29T11:44:59.242241544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:44:59.243743 containerd[1466]: time="2025-01-29T11:44:59.243711310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 935.437197ms" Jan 29 11:44:59.244440 containerd[1466]: time="2025-01-29T11:44:59.244411702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 932.299068ms" Jan 29 11:44:59.245194 containerd[1466]: time="2025-01-29T11:44:59.245162465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 946.516569ms" Jan 29 11:44:59.454233 kubelet[2123]: I0129 11:44:59.454081 2123 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:44:59.454885 kubelet[2123]: E0129 11:44:59.454827 2123 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 29 11:44:59.513570 containerd[1466]: time="2025-01-29T11:44:59.513376564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:44:59.515937 containerd[1466]: time="2025-01-29T11:44:59.514265393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:44:59.515937 containerd[1466]: time="2025-01-29T11:44:59.514291221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:44:59.515937 containerd[1466]: time="2025-01-29T11:44:59.514479337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:44:59.515937 containerd[1466]: time="2025-01-29T11:44:59.515696964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:44:59.515937 containerd[1466]: time="2025-01-29T11:44:59.515757799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:44:59.516148 containerd[1466]: time="2025-01-29T11:44:59.515777784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:44:59.516148 containerd[1466]: time="2025-01-29T11:44:59.515898581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:44:59.517982 containerd[1466]: time="2025-01-29T11:44:59.517051956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:44:59.517982 containerd[1466]: time="2025-01-29T11:44:59.517122613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:44:59.517982 containerd[1466]: time="2025-01-29T11:44:59.517147749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:44:59.518245 containerd[1466]: time="2025-01-29T11:44:59.518167899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:44:59.570071 systemd[1]: Started cri-containerd-107e2246e8da418a6b24985a2076deb9f9dea6e63146493860a69e1f3fd598a8.scope - libcontainer container 107e2246e8da418a6b24985a2076deb9f9dea6e63146493860a69e1f3fd598a8. Jan 29 11:44:59.574413 systemd[1]: Started cri-containerd-1825cae9c1aef27286650d1d54ca766729461c434af2c69a554c73267d314022.scope - libcontainer container 1825cae9c1aef27286650d1d54ca766729461c434af2c69a554c73267d314022. Jan 29 11:44:59.575992 systemd[1]: Started cri-containerd-5e8cbb27061c43f0fca685eb9b8ce806ac64dab0a8bbc7641944a3f7e99f74b1.scope - libcontainer container 5e8cbb27061c43f0fca685eb9b8ce806ac64dab0a8bbc7641944a3f7e99f74b1. 
Jan 29 11:44:59.619422 containerd[1466]: time="2025-01-29T11:44:59.619350984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"107e2246e8da418a6b24985a2076deb9f9dea6e63146493860a69e1f3fd598a8\"" Jan 29 11:44:59.621317 kubelet[2123]: E0129 11:44:59.621018 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:44:59.623870 containerd[1466]: time="2025-01-29T11:44:59.623837353Z" level=info msg="CreateContainer within sandbox \"107e2246e8da418a6b24985a2076deb9f9dea6e63146493860a69e1f3fd598a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:44:59.624883 containerd[1466]: time="2025-01-29T11:44:59.624680091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f25de693ce4f337f40180689e5591bf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e8cbb27061c43f0fca685eb9b8ce806ac64dab0a8bbc7641944a3f7e99f74b1\"" Jan 29 11:44:59.626018 kubelet[2123]: E0129 11:44:59.625979 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:44:59.628066 containerd[1466]: time="2025-01-29T11:44:59.628029782Z" level=info msg="CreateContainer within sandbox \"5e8cbb27061c43f0fca685eb9b8ce806ac64dab0a8bbc7641944a3f7e99f74b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:44:59.628336 containerd[1466]: time="2025-01-29T11:44:59.628308770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"1825cae9c1aef27286650d1d54ca766729461c434af2c69a554c73267d314022\"" Jan 29 11:44:59.629795 kubelet[2123]: E0129 11:44:59.629751 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:44:59.631814 containerd[1466]: time="2025-01-29T11:44:59.631784099Z" level=info msg="CreateContainer within sandbox \"1825cae9c1aef27286650d1d54ca766729461c434af2c69a554c73267d314022\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:44:59.649898 containerd[1466]: time="2025-01-29T11:44:59.649701963Z" level=info msg="CreateContainer within sandbox \"107e2246e8da418a6b24985a2076deb9f9dea6e63146493860a69e1f3fd598a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c3ad7c547b8e58e39dcf2c0d0ac5eefdad9ffecd25d0b0e851334d8a89a59c9\"" Jan 29 11:44:59.650429 containerd[1466]: time="2025-01-29T11:44:59.650360622Z" level=info msg="StartContainer for \"6c3ad7c547b8e58e39dcf2c0d0ac5eefdad9ffecd25d0b0e851334d8a89a59c9\"" Jan 29 11:44:59.662019 containerd[1466]: time="2025-01-29T11:44:59.661852631Z" level=info msg="CreateContainer within sandbox \"5e8cbb27061c43f0fca685eb9b8ce806ac64dab0a8bbc7641944a3f7e99f74b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd8a1175c46fb5b2277e9e0f585ef5467cf0892e599deb691e22fb349a1b9725\"" Jan 29 11:44:59.662211 containerd[1466]: time="2025-01-29T11:44:59.662174614Z" level=info msg="StartContainer for \"dd8a1175c46fb5b2277e9e0f585ef5467cf0892e599deb691e22fb349a1b9725\"" Jan 29 
11:44:59.665551 containerd[1466]: time="2025-01-29T11:44:59.665511817Z" level=info msg="CreateContainer within sandbox \"1825cae9c1aef27286650d1d54ca766729461c434af2c69a554c73267d314022\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc7e305f537598105b8287af448280435df12961490a487f842b69c6ad8823f3\"" Jan 29 11:44:59.665958 containerd[1466]: time="2025-01-29T11:44:59.665936218Z" level=info msg="StartContainer for \"dc7e305f537598105b8287af448280435df12961490a487f842b69c6ad8823f3\"" Jan 29 11:44:59.743081 systemd[1]: Started cri-containerd-6c3ad7c547b8e58e39dcf2c0d0ac5eefdad9ffecd25d0b0e851334d8a89a59c9.scope - libcontainer container 6c3ad7c547b8e58e39dcf2c0d0ac5eefdad9ffecd25d0b0e851334d8a89a59c9. Jan 29 11:44:59.747369 systemd[1]: Started cri-containerd-dc7e305f537598105b8287af448280435df12961490a487f842b69c6ad8823f3.scope - libcontainer container dc7e305f537598105b8287af448280435df12961490a487f842b69c6ad8823f3. Jan 29 11:44:59.749450 systemd[1]: Started cri-containerd-dd8a1175c46fb5b2277e9e0f585ef5467cf0892e599deb691e22fb349a1b9725.scope - libcontainer container dd8a1175c46fb5b2277e9e0f585ef5467cf0892e599deb691e22fb349a1b9725. Jan 29 11:44:59.766086 kubelet[2123]: E0129 11:44:59.766014 2123 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:44:59.803348 containerd[1466]: time="2025-01-29T11:44:59.802899530Z" level=info msg="StartContainer for \"6c3ad7c547b8e58e39dcf2c0d0ac5eefdad9ffecd25d0b0e851334d8a89a59c9\" returns successfully" Jan 29 11:44:59.803825 containerd[1466]: time="2025-01-29T11:44:59.803180913Z" level=info msg="StartContainer for \"dc7e305f537598105b8287af448280435df12961490a487f842b69c6ad8823f3\" returns successfully" Jan 29 11:44:59.804081 containerd[1466]: time="2025-01-29T11:44:59.803236416Z" level=info msg="StartContainer for \"dd8a1175c46fb5b2277e9e0f585ef5467cf0892e599deb691e22fb349a1b9725\" returns successfully" Jan 29 11:45:00.677862 kubelet[2123]: E0129 11:45:00.677818 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:00.678850 kubelet[2123]: E0129 11:45:00.678814 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:00.680265 kubelet[2123]: E0129 11:45:00.680230 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:01.064106 kubelet[2123]: I0129 11:45:01.063604 2123 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:45:01.330209 kubelet[2123]: E0129 11:45:01.330044 2123 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:45:01.429752 kubelet[2123]: I0129 11:45:01.429285 2123 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:45:01.583608 kubelet[2123]: I0129 11:45:01.583476 2123 apiserver.go:52] "Watching apiserver" Jan 29 11:45:01.639043 
kubelet[2123]: I0129 11:45:01.639008 2123 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:45:01.684874 kubelet[2123]: E0129 11:45:01.684635 2123 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:45:01.684874 kubelet[2123]: E0129 11:45:01.684656 2123 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:45:01.684874 kubelet[2123]: E0129 11:45:01.684659 2123 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 29 11:45:01.684874 kubelet[2123]: E0129 11:45:01.684813 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:01.684874 kubelet[2123]: E0129 11:45:01.684820 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:01.684874 kubelet[2123]: E0129 11:45:01.684834 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:02.684713 kubelet[2123]: E0129 11:45:02.684618 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:03.601221 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... Jan 29 11:45:03.601251 systemd[1]: Reloading... Jan 29 11:45:03.687948 kubelet[2123]: E0129 11:45:03.684487 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:03.702818 zram_generator::config[2437]: No configuration found. Jan 29 11:45:03.812678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:45:03.915495 systemd[1]: Reloading finished in 313 ms. Jan 29 11:45:03.965128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:45:03.965289 kubelet[2123]: I0129 11:45:03.964956 2123 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:45:03.975993 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:45:03.976442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:45:03.976558 systemd[1]: kubelet.service: Consumed 1.040s CPU time, 121.4M memory peak, 0B memory swap peak. Jan 29 11:45:03.987254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:45:04.135399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:45:04.140242 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:45:04.183006 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:45:04.183006 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:45:04.183006 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:45:04.183661 kubelet[2482]: I0129 11:45:04.183606 2482 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:45:04.189550 kubelet[2482]: I0129 11:45:04.189528 2482 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:45:04.190166 kubelet[2482]: I0129 11:45:04.189617 2482 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:45:04.190166 kubelet[2482]: I0129 11:45:04.189813 2482 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:45:04.193879 kubelet[2482]: I0129 11:45:04.193861 2482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:45:04.195902 kubelet[2482]: I0129 11:45:04.195884 2482 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:45:04.198967 kubelet[2482]: E0129 11:45:04.198904 2482 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:45:04.198967 kubelet[2482]: I0129 11:45:04.198954 2482 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:45:04.275188 kubelet[2482]: I0129 11:45:04.275133 2482 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:45:04.275330 kubelet[2482]: I0129 11:45:04.275275 2482 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:45:04.275435 kubelet[2482]: I0129 11:45:04.275389 2482 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:45:04.275643 kubelet[2482]: I0129 11:45:04.275425 2482 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:45:04.275643 kubelet[2482]: I0129 11:45:04.275642 2482 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:45:04.275746 kubelet[2482]: I0129 11:45:04.275652 2482 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:45:04.275746 kubelet[2482]: I0129 11:45:04.275696 2482 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:45:04.275840 kubelet[2482]: I0129 11:45:04.275822 2482 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:45:04.275840 kubelet[2482]: I0129 11:45:04.275839 2482 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:45:04.275895 kubelet[2482]: I0129 11:45:04.275875 2482 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:45:04.275895 kubelet[2482]: I0129 11:45:04.275892 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:45:04.276683 kubelet[2482]: I0129 11:45:04.276660 2482 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:45:04.277111 kubelet[2482]: I0129 11:45:04.277076 2482 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:45:04.278080 kubelet[2482]: I0129 11:45:04.277578 2482 server.go:1269] "Started kubelet" Jan 29 11:45:04.281952 kubelet[2482]: I0129 11:45:04.280394 2482 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:45:04.281952 kubelet[2482]: I0129 
11:45:04.281627 2482 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:45:04.286070 kubelet[2482]: E0129 11:45:04.284880 2482 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:45:04.286070 kubelet[2482]: I0129 11:45:04.285160 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:45:04.286070 kubelet[2482]: I0129 11:45:04.285451 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:45:04.286070 kubelet[2482]: I0129 11:45:04.285975 2482 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:45:04.286196 kubelet[2482]: E0129 11:45:04.286097 2482 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:45:04.286196 kubelet[2482]: I0129 11:45:04.286146 2482 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:45:04.286353 kubelet[2482]: I0129 11:45:04.286328 2482 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:45:04.287477 kubelet[2482]: I0129 11:45:04.287438 2482 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:45:04.287744 kubelet[2482]: I0129 11:45:04.287683 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:45:04.289906 kubelet[2482]: I0129 11:45:04.289373 2482 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:45:04.293680 kubelet[2482]: I0129 11:45:04.293626 2482 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:45:04.295033 kubelet[2482]: I0129 11:45:04.294995 2482 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:45:04.299306 kubelet[2482]: I0129 11:45:04.299270 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:45:04.301236 kubelet[2482]: I0129 11:45:04.301209 2482 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:45:04.301265 kubelet[2482]: I0129 11:45:04.301243 2482 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:45:04.301265 kubelet[2482]: I0129 11:45:04.301261 2482 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:45:04.301325 kubelet[2482]: E0129 11:45:04.301300 2482 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:45:04.331516 kubelet[2482]: I0129 11:45:04.331480 2482 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:45:04.331516 kubelet[2482]: I0129 11:45:04.331502 2482 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:45:04.331516 kubelet[2482]: I0129 11:45:04.331523 2482 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:45:04.331697 kubelet[2482]: I0129 11:45:04.331658 2482 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:45:04.331697 kubelet[2482]: I0129 11:45:04.331668 2482 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:45:04.331697 kubelet[2482]: I0129 11:45:04.331686 2482 policy_none.go:49] "None policy: Start" Jan 29 11:45:04.332361 kubelet[2482]: I0129 11:45:04.332337 2482 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:45:04.332396 kubelet[2482]: I0129 11:45:04.332368 2482 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:45:04.332606 kubelet[2482]: I0129 11:45:04.332583 2482 state_mem.go:75] "Updated machine memory state" Jan 29 11:45:04.336846 kubelet[2482]: I0129 11:45:04.336807 2482 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:45:04.337040 kubelet[2482]: I0129 11:45:04.337023 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:45:04.337071 kubelet[2482]: I0129 11:45:04.337040 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:45:04.337260 kubelet[2482]: I0129 11:45:04.337244 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:45:04.409167 kubelet[2482]: E0129 11:45:04.409103 2482 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:45:04.442832 kubelet[2482]: I0129 11:45:04.442714 2482 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:45:04.453998 kubelet[2482]: I0129 11:45:04.453301 2482 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:45:04.453998 kubelet[2482]: I0129 11:45:04.453391 2482 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:45:04.487736 kubelet[2482]: I0129 11:45:04.487663 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f25de693ce4f337f40180689e5591bf9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f25de693ce4f337f40180689e5591bf9\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:45:04.487736 kubelet[2482]: I0129 11:45:04.487704 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f25de693ce4f337f40180689e5591bf9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f25de693ce4f337f40180689e5591bf9\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:45:04.487736 kubelet[2482]: I0129 11:45:04.487725 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:45:04.487736 kubelet[2482]: I0129 11:45:04.487739 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:45:04.487988 kubelet[2482]: I0129 11:45:04.487753 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:45:04.487988 kubelet[2482]: I0129 11:45:04.487773 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:45:04.487988 kubelet[2482]: I0129 11:45:04.487789 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f25de693ce4f337f40180689e5591bf9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f25de693ce4f337f40180689e5591bf9\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:45:04.487988 kubelet[2482]: I0129 11:45:04.487803 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:45:04.487988 kubelet[2482]: I0129 11:45:04.487817 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:45:04.706870 kubelet[2482]: E0129 11:45:04.706747 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:04.707679 kubelet[2482]: E0129 11:45:04.707642 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:04.709620 kubelet[2482]: E0129 11:45:04.709581 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:05.276674 kubelet[2482]: I0129 11:45:05.276370 2482 apiserver.go:52] "Watching apiserver" Jan 29 11:45:05.286271 kubelet[2482]: I0129 11:45:05.286228 2482 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:45:05.313575 kubelet[2482]: E0129 11:45:05.313530 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:05.314374 kubelet[2482]: E0129 11:45:05.314348 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:05.333938 kubelet[2482]: E0129 11:45:05.333893 2482 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:45:05.334109 kubelet[2482]: E0129 11:45:05.334092 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:05.350273 kubelet[2482]: I0129 11:45:05.350198 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.350168319 podStartE2EDuration="1.350168319s" podCreationTimestamp="2025-01-29 11:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:45:05.343121698 +0000 UTC m=+1.195936517" watchObservedRunningTime="2025-01-29 11:45:05.350168319 +0000 UTC m=+1.202983128" Jan 29 11:45:05.361827 kubelet[2482]: I0129 11:45:05.361117 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.36109472 podStartE2EDuration="1.36109472s" podCreationTimestamp="2025-01-29 11:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:45:05.350379731 +0000 UTC m=+1.203194550" watchObservedRunningTime="2025-01-29 11:45:05.36109472 +0000 UTC m=+1.213909539" Jan 29 11:45:06.315767 kubelet[2482]: E0129 11:45:06.315708 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:07.408420 kubelet[2482]: E0129 11:45:07.408382 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:08.571448 kubelet[2482]: I0129 11:45:08.571395 2482 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:45:08.571906 containerd[1466]: time="2025-01-29T11:45:08.571814648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:45:08.572460 kubelet[2482]: I0129 11:45:08.572427 2482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:45:08.791135 sudo[1645]: pam_unix(sudo:session): session closed for user root Jan 29 11:45:08.792986 sshd[1642]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:08.797000 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:52712.service: Deactivated successfully. Jan 29 11:45:08.799131 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:45:08.799311 systemd[1]: session-7.scope: Consumed 4.013s CPU time, 157.6M memory peak, 0B memory swap peak. Jan 29 11:45:08.799710 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:45:08.800523 systemd-logind[1452]: Removed session 7. Jan 29 11:45:09.051330 kubelet[2482]: I0129 11:45:09.051173 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.051125303 podStartE2EDuration="7.051125303s" podCreationTimestamp="2025-01-29 11:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:45:05.361322626 +0000 UTC m=+1.214137435" watchObservedRunningTime="2025-01-29 11:45:09.051125303 +0000 UTC m=+4.903940123" Jan 29 11:45:09.059510 systemd[1]: Created slice kubepods-besteffort-podce7d99c3_2703_447c_a18c_37148cf34256.slice - libcontainer container kubepods-besteffort-podce7d99c3_2703_447c_a18c_37148cf34256.slice. Jan 29 11:45:09.115191 kubelet[2482]: I0129 11:45:09.115143 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce7d99c3-2703-447c-a18c-37148cf34256-kube-proxy\") pod \"kube-proxy-424rf\" (UID: \"ce7d99c3-2703-447c-a18c-37148cf34256\") " pod="kube-system/kube-proxy-424rf" Jan 29 11:45:09.115191 kubelet[2482]: I0129 11:45:09.115176 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce7d99c3-2703-447c-a18c-37148cf34256-xtables-lock\") pod \"kube-proxy-424rf\" (UID: \"ce7d99c3-2703-447c-a18c-37148cf34256\") " pod="kube-system/kube-proxy-424rf" Jan 29 11:45:09.115191 kubelet[2482]: I0129 11:45:09.115198 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce7d99c3-2703-447c-a18c-37148cf34256-lib-modules\") pod \"kube-proxy-424rf\" (UID: \"ce7d99c3-2703-447c-a18c-37148cf34256\") " pod="kube-system/kube-proxy-424rf" Jan 29 11:45:09.115394 kubelet[2482]: I0129 11:45:09.115251 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtt57\" (UniqueName: \"kubernetes.io/projected/ce7d99c3-2703-447c-a18c-37148cf34256-kube-api-access-vtt57\") pod \"kube-proxy-424rf\" (UID: \"ce7d99c3-2703-447c-a18c-37148cf34256\") " pod="kube-system/kube-proxy-424rf" Jan 29 11:45:09.220081 kubelet[2482]: E0129 11:45:09.220030 2482 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 11:45:09.220081 kubelet[2482]: E0129 11:45:09.220063 2482 projected.go:194] Error preparing data for projected volume kube-api-access-vtt57 for pod kube-system/kube-proxy-424rf: configmap "kube-root-ca.crt" not found Jan 29 11:45:09.220275 kubelet[2482]: E0129 11:45:09.220137 2482 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce7d99c3-2703-447c-a18c-37148cf34256-kube-api-access-vtt57 podName:ce7d99c3-2703-447c-a18c-37148cf34256 nodeName:}" failed. No retries permitted until 2025-01-29 11:45:09.720105816 +0000 UTC m=+5.572920636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vtt57" (UniqueName: "kubernetes.io/projected/ce7d99c3-2703-447c-a18c-37148cf34256-kube-api-access-vtt57") pod "kube-proxy-424rf" (UID: "ce7d99c3-2703-447c-a18c-37148cf34256") : configmap "kube-root-ca.crt" not found Jan 29 11:45:09.673065 systemd[1]: Created slice kubepods-besteffort-pod75850248_2210_4008_9084_766cebc54ab3.slice - libcontainer container kubepods-besteffort-pod75850248_2210_4008_9084_766cebc54ab3.slice. Jan 29 11:45:09.719418 kubelet[2482]: I0129 11:45:09.719383 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qdh2\" (UniqueName: \"kubernetes.io/projected/75850248-2210-4008-9084-766cebc54ab3-kube-api-access-6qdh2\") pod \"tigera-operator-76c4976dd7-wkf2b\" (UID: \"75850248-2210-4008-9084-766cebc54ab3\") " pod="tigera-operator/tigera-operator-76c4976dd7-wkf2b" Jan 29 11:45:09.719418 kubelet[2482]: I0129 11:45:09.719426 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75850248-2210-4008-9084-766cebc54ab3-var-lib-calico\") pod \"tigera-operator-76c4976dd7-wkf2b\" (UID: \"75850248-2210-4008-9084-766cebc54ab3\") " pod="tigera-operator/tigera-operator-76c4976dd7-wkf2b" Jan 29 11:45:09.971562 kubelet[2482]: E0129 11:45:09.971399 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:09.972089 containerd[1466]: time="2025-01-29T11:45:09.972047173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-424rf,Uid:ce7d99c3-2703-447c-a18c-37148cf34256,Namespace:kube-system,Attempt:0,}" Jan 29 11:45:09.976088 containerd[1466]: time="2025-01-29T11:45:09.976044425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-wkf2b,Uid:75850248-2210-4008-9084-766cebc54ab3,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:45:10.010448 containerd[1466]: time="2025-01-29T11:45:10.010315862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:10.010448 containerd[1466]: time="2025-01-29T11:45:10.010369400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:10.010448 containerd[1466]: time="2025-01-29T11:45:10.010379561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:10.010696 containerd[1466]: time="2025-01-29T11:45:10.010453128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:10.016222 containerd[1466]: time="2025-01-29T11:45:10.015868042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:10.016222 containerd[1466]: time="2025-01-29T11:45:10.015931018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:10.016222 containerd[1466]: time="2025-01-29T11:45:10.015950467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:10.016222 containerd[1466]: time="2025-01-29T11:45:10.016041059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:10.030081 systemd[1]: Started cri-containerd-869d47fd339c90bc050296bc9e76f074c5c7963833b56d800cdfb2099f5889da.scope - libcontainer container 869d47fd339c90bc050296bc9e76f074c5c7963833b56d800cdfb2099f5889da. Jan 29 11:45:10.033158 systemd[1]: Started cri-containerd-1eea1675273d67267bf9aa5740c7fd3434ed1b0ba17ea9f74e3cd0a3e56d8b0b.scope - libcontainer container 1eea1675273d67267bf9aa5740c7fd3434ed1b0ba17ea9f74e3cd0a3e56d8b0b. Jan 29 11:45:10.052662 containerd[1466]: time="2025-01-29T11:45:10.052594187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-424rf,Uid:ce7d99c3-2703-447c-a18c-37148cf34256,Namespace:kube-system,Attempt:0,} returns sandbox id \"869d47fd339c90bc050296bc9e76f074c5c7963833b56d800cdfb2099f5889da\"" Jan 29 11:45:10.053772 kubelet[2482]: E0129 11:45:10.053324 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:10.056043 containerd[1466]: time="2025-01-29T11:45:10.055999904Z" level=info msg="CreateContainer within sandbox \"869d47fd339c90bc050296bc9e76f074c5c7963833b56d800cdfb2099f5889da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:45:10.071573 containerd[1466]: time="2025-01-29T11:45:10.071514861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-wkf2b,Uid:75850248-2210-4008-9084-766cebc54ab3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1eea1675273d67267bf9aa5740c7fd3434ed1b0ba17ea9f74e3cd0a3e56d8b0b\"" Jan 29 11:45:10.073807 containerd[1466]: time="2025-01-29T11:45:10.073776996Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:45:10.082743 containerd[1466]: time="2025-01-29T11:45:10.082703959Z" level=info msg="CreateContainer within sandbox \"869d47fd339c90bc050296bc9e76f074c5c7963833b56d800cdfb2099f5889da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d638b8a462b6e313178db2dff50c55326ffd012a3cc3dc3b33421a6e0eb15b4\"" Jan 29 11:45:10.083154 containerd[1466]: time="2025-01-29T11:45:10.083102078Z" level=info msg="StartContainer for \"0d638b8a462b6e313178db2dff50c55326ffd012a3cc3dc3b33421a6e0eb15b4\"" Jan 29 11:45:10.114044 systemd[1]: Started cri-containerd-0d638b8a462b6e313178db2dff50c55326ffd012a3cc3dc3b33421a6e0eb15b4.scope - libcontainer container 0d638b8a462b6e313178db2dff50c55326ffd012a3cc3dc3b33421a6e0eb15b4. 
Jan 29 11:45:10.141152 containerd[1466]: time="2025-01-29T11:45:10.141110902Z" level=info msg="StartContainer for \"0d638b8a462b6e313178db2dff50c55326ffd012a3cc3dc3b33421a6e0eb15b4\" returns successfully" Jan 29 11:45:10.322765 kubelet[2482]: E0129 11:45:10.321935 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:11.983017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937792337.mount: Deactivated successfully. Jan 29 11:45:12.148701 kubelet[2482]: E0129 11:45:12.148660 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:12.160819 kubelet[2482]: I0129 11:45:12.160714 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-424rf" podStartSLOduration=3.160691936 podStartE2EDuration="3.160691936s" podCreationTimestamp="2025-01-29 11:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:45:10.32945762 +0000 UTC m=+6.182272439" watchObservedRunningTime="2025-01-29 11:45:12.160691936 +0000 UTC m=+8.013506755" Jan 29 11:45:12.325785 kubelet[2482]: E0129 11:45:12.325670 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:12.861020 containerd[1466]: time="2025-01-29T11:45:12.860962743Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:12.861712 containerd[1466]: time="2025-01-29T11:45:12.861667797Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:45:12.862908 containerd[1466]: time="2025-01-29T11:45:12.862868897Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:12.865119 containerd[1466]: time="2025-01-29T11:45:12.865068647Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:12.865605 containerd[1466]: time="2025-01-29T11:45:12.865575346Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.791769793s" Jan 29 11:45:12.865635 containerd[1466]: time="2025-01-29T11:45:12.865605396Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:45:12.867675 containerd[1466]: time="2025-01-29T11:45:12.867649797Z" level=info msg="CreateContainer within sandbox \"1eea1675273d67267bf9aa5740c7fd3434ed1b0ba17ea9f74e3cd0a3e56d8b0b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:45:12.880416 containerd[1466]: time="2025-01-29T11:45:12.880385073Z" level=info 
msg="CreateContainer within sandbox \"1eea1675273d67267bf9aa5740c7fd3434ed1b0ba17ea9f74e3cd0a3e56d8b0b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b436e31c46421ea5b3f5d5d5cc578ca937ae24dc2f56f4e9e83dbc71a3d26fb3\"" Jan 29 11:45:12.880743 containerd[1466]: time="2025-01-29T11:45:12.880717115Z" level=info msg="StartContainer for \"b436e31c46421ea5b3f5d5d5cc578ca937ae24dc2f56f4e9e83dbc71a3d26fb3\"" Jan 29 11:45:12.907040 systemd[1]: Started cri-containerd-b436e31c46421ea5b3f5d5d5cc578ca937ae24dc2f56f4e9e83dbc71a3d26fb3.scope - libcontainer container b436e31c46421ea5b3f5d5d5cc578ca937ae24dc2f56f4e9e83dbc71a3d26fb3. Jan 29 11:45:12.932347 containerd[1466]: time="2025-01-29T11:45:12.932293546Z" level=info msg="StartContainer for \"b436e31c46421ea5b3f5d5d5cc578ca937ae24dc2f56f4e9e83dbc71a3d26fb3\" returns successfully" Jan 29 11:45:13.337678 kubelet[2482]: I0129 11:45:13.337587 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-wkf2b" podStartSLOduration=1.5447270899999999 podStartE2EDuration="4.337561738s" podCreationTimestamp="2025-01-29 11:45:09 +0000 UTC" firstStartedPulling="2025-01-29 11:45:10.073406263 +0000 UTC m=+5.926221082" lastFinishedPulling="2025-01-29 11:45:12.866240921 +0000 UTC m=+8.719055730" observedRunningTime="2025-01-29 11:45:13.337298114 +0000 UTC m=+9.190112933" watchObservedRunningTime="2025-01-29 11:45:13.337561738 +0000 UTC m=+9.190376577" Jan 29 11:45:13.820951 kubelet[2482]: E0129 11:45:13.820893 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:14.330509 kubelet[2482]: E0129 11:45:14.330457 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:15.904113 systemd[1]: Created slice kubepods-besteffort-pod8dbd653f_a89f_4380_9ece_e8d9decadba5.slice - libcontainer container kubepods-besteffort-pod8dbd653f_a89f_4380_9ece_e8d9decadba5.slice. Jan 29 11:45:15.922614 systemd[1]: Created slice kubepods-besteffort-pod301f3c4a_f711_47c0_8088_0cdc36008f44.slice - libcontainer container kubepods-besteffort-pod301f3c4a_f711_47c0_8088_0cdc36008f44.slice. 
Jan 29 11:45:15.955783 kubelet[2482]: I0129 11:45:15.955693 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/301f3c4a-f711-47c0-8088-0cdc36008f44-node-certs\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.955783 kubelet[2482]: I0129 11:45:15.955744 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-cni-net-dir\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956316 kubelet[2482]: I0129 11:45:15.955810 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-policysync\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956316 kubelet[2482]: I0129 11:45:15.955837 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/301f3c4a-f711-47c0-8088-0cdc36008f44-tigera-ca-bundle\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956316 kubelet[2482]: I0129 11:45:15.955857 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-var-run-calico\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956316 kubelet[2482]: I0129 11:45:15.955885 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbd653f-a89f-4380-9ece-e8d9decadba5-tigera-ca-bundle\") pod \"calico-typha-6c94b8cb6d-pbfp9\" (UID: \"8dbd653f-a89f-4380-9ece-e8d9decadba5\") " pod="calico-system/calico-typha-6c94b8cb6d-pbfp9" Jan 29 11:45:15.956316 kubelet[2482]: I0129 11:45:15.955935 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94qt\" (UniqueName: \"kubernetes.io/projected/8dbd653f-a89f-4380-9ece-e8d9decadba5-kube-api-access-w94qt\") pod \"calico-typha-6c94b8cb6d-pbfp9\" (UID: \"8dbd653f-a89f-4380-9ece-e8d9decadba5\") " pod="calico-system/calico-typha-6c94b8cb6d-pbfp9" Jan 29 11:45:15.956437 kubelet[2482]: I0129 11:45:15.955958 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-xtables-lock\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956437 kubelet[2482]: I0129 11:45:15.956002 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-lib-modules\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956437 
kubelet[2482]: I0129 11:45:15.956032 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-cni-bin-dir\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956437 kubelet[2482]: I0129 11:45:15.956061 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-flexvol-driver-host\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956437 kubelet[2482]: I0129 11:45:15.956097 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-var-lib-calico\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956551 kubelet[2482]: I0129 11:45:15.956125 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/301f3c4a-f711-47c0-8088-0cdc36008f44-cni-log-dir\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956551 kubelet[2482]: I0129 11:45:15.956187 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f482\" (UniqueName: \"kubernetes.io/projected/301f3c4a-f711-47c0-8088-0cdc36008f44-kube-api-access-9f482\") pod \"calico-node-25jw6\" (UID: \"301f3c4a-f711-47c0-8088-0cdc36008f44\") " pod="calico-system/calico-node-25jw6" Jan 29 11:45:15.956551 kubelet[2482]: I0129 11:45:15.956212 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8dbd653f-a89f-4380-9ece-e8d9decadba5-typha-certs\") pod \"calico-typha-6c94b8cb6d-pbfp9\" (UID: \"8dbd653f-a89f-4380-9ece-e8d9decadba5\") " pod="calico-system/calico-typha-6c94b8cb6d-pbfp9" Jan 29 11:45:16.033210 kubelet[2482]: E0129 11:45:16.033148 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:16.056933 kubelet[2482]: I0129 11:45:16.056890 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f43cd9c6-970c-4688-9f00-2800e91cf652-kubelet-dir\") pod \"csi-node-driver-sllk6\" (UID: \"f43cd9c6-970c-4688-9f00-2800e91cf652\") " pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:16.057165 kubelet[2482]: I0129 11:45:16.057142 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7pv\" (UniqueName: \"kubernetes.io/projected/f43cd9c6-970c-4688-9f00-2800e91cf652-kube-api-access-kx7pv\") pod \"csi-node-driver-sllk6\" (UID: \"f43cd9c6-970c-4688-9f00-2800e91cf652\") " pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:16.057778 kubelet[2482]: I0129 
11:45:16.057278 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f43cd9c6-970c-4688-9f00-2800e91cf652-socket-dir\") pod \"csi-node-driver-sllk6\" (UID: \"f43cd9c6-970c-4688-9f00-2800e91cf652\") " pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:16.057778 kubelet[2482]: I0129 11:45:16.057307 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f43cd9c6-970c-4688-9f00-2800e91cf652-registration-dir\") pod \"csi-node-driver-sllk6\" (UID: \"f43cd9c6-970c-4688-9f00-2800e91cf652\") " pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:16.057778 kubelet[2482]: I0129 11:45:16.057335 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f43cd9c6-970c-4688-9f00-2800e91cf652-varrun\") pod \"csi-node-driver-sllk6\" (UID: \"f43cd9c6-970c-4688-9f00-2800e91cf652\") " pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:16.063794 kubelet[2482]: E0129 11:45:16.060938 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:45:16.063794 kubelet[2482]: W0129 11:45:16.060977 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:45:16.063794 kubelet[2482]: E0129 11:45:16.061022 2482 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [the same three FlexVolume messages — driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 "driver call failed", and plugins.go:691 "Error dynamically probing plugins" — repeat verbatim, with only timestamps advancing from 11:45:16.061 to 11:45:16.165; duplicate repetitions trimmed] Jan 29 11:45:16.165615 kubelet[2482]: E0129 11:45:16.165596 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:45:16.165615 kubelet[2482]: W0129 11:45:16.165612 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:45:16.165719 kubelet[2482]: E0129 11:45:16.165631 2482 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:45:16.165978 kubelet[2482]: E0129 11:45:16.165956 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:45:16.165978 kubelet[2482]: W0129 11:45:16.165970 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:45:16.166066 kubelet[2482]: E0129 11:45:16.165990 2482 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:45:16.166312 kubelet[2482]: E0129 11:45:16.166295 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:45:16.166312 kubelet[2482]: W0129 11:45:16.166309 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:45:16.166396 kubelet[2482]: E0129 11:45:16.166322 2482 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:45:16.173210 kubelet[2482]: E0129 11:45:16.173183 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:45:16.173210 kubelet[2482]: W0129 11:45:16.173206 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:45:16.173292 kubelet[2482]: E0129 11:45:16.173224 2482 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:45:16.207622 kubelet[2482]: E0129 11:45:16.207571 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:16.208253 containerd[1466]: time="2025-01-29T11:45:16.208209064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c94b8cb6d-pbfp9,Uid:8dbd653f-a89f-4380-9ece-e8d9decadba5,Namespace:calico-system,Attempt:0,}" Jan 29 11:45:16.226357 kubelet[2482]: E0129 11:45:16.226326 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:16.227013 containerd[1466]: time="2025-01-29T11:45:16.226938522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-25jw6,Uid:301f3c4a-f711-47c0-8088-0cdc36008f44,Namespace:calico-system,Attempt:0,}" Jan 29 11:45:16.390792 containerd[1466]: time="2025-01-29T11:45:16.390650550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:16.390792 containerd[1466]: time="2025-01-29T11:45:16.390703925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:16.390792 containerd[1466]: time="2025-01-29T11:45:16.390715598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:16.391031 containerd[1466]: time="2025-01-29T11:45:16.390795446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:16.394625 containerd[1466]: time="2025-01-29T11:45:16.394330577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:16.394625 containerd[1466]: time="2025-01-29T11:45:16.394518988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:16.394625 containerd[1466]: time="2025-01-29T11:45:16.394540961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:16.395025 containerd[1466]: time="2025-01-29T11:45:16.394719231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:16.414111 systemd[1]: Started cri-containerd-75bde2bbc87d9a8a285a4932efe7058655e1269a648eaa69d68eca58439698d8.scope - libcontainer container 75bde2bbc87d9a8a285a4932efe7058655e1269a648eaa69d68eca58439698d8. Jan 29 11:45:16.417004 systemd[1]: Started cri-containerd-f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8.scope - libcontainer container f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8. Jan 29 11:45:16.443027 containerd[1466]: time="2025-01-29T11:45:16.442986791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-25jw6,Uid:301f3c4a-f711-47c0-8088-0cdc36008f44,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\"" Jan 29 11:45:16.443993 kubelet[2482]: E0129 11:45:16.443900 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:16.444987 containerd[1466]: time="2025-01-29T11:45:16.444945056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:45:16.459962 containerd[1466]: time="2025-01-29T11:45:16.459894662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c94b8cb6d-pbfp9,Uid:8dbd653f-a89f-4380-9ece-e8d9decadba5,Namespace:calico-system,Attempt:0,} returns sandbox id \"75bde2bbc87d9a8a285a4932efe7058655e1269a648eaa69d68eca58439698d8\"" Jan 29 11:45:16.460827 kubelet[2482]: E0129 11:45:16.460792 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:17.412827 kubelet[2482]: E0129 11:45:17.412758 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:17.462990 update_engine[1455]: I20250129 11:45:17.462888 1455 update_attempter.cc:509] Updating boot flags... 
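
The burst of driver-call.go and plugins.go messages above is the kubelet's FlexVolume prober at work: on every plugin re-scan it executes each driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and unmarshals the process's stdout as a JSON status object. Here the nodeagent~uds/uds executable does not exist, the call produces empty output, and decoding "" fails with "unexpected end of JSON input". A minimal sketch of the init handshake such a driver is expected to implement, per the exec-based FlexVolume contract (illustrative only, not the actual uds driver):

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// DriverStatus mirrors the JSON object the kubelet tries to unmarshal
// after every driver call (the failure surfaced by driver-call.go above).
type DriverStatus struct {
    Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    if len(os.Args) > 1 && os.Args[1] == "init" {
        // Emitting valid JSON here is exactly what the missing uds binary
        // fails to do, producing "unexpected end of JSON input".
        out, _ := json.Marshal(DriverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out))
        return
    }
    out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
    fmt.Println(string(out))
    os.Exit(1)
}

Installing a binary that answers init this way, or removing the stale nodeagent~uds directory, would be expected to silence the probe errors.
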
Jan 29 11:45:17.464208 kubelet[2482]: E0129 11:45:17.463294 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:45:17.464208 kubelet[2482]: W0129 11:45:17.463320 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:45:17.464208 kubelet[2482]: E0129 11:45:17.463343 2482 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [... identical FlexVolume probe error groups repeated through Jan 29 11:45:17.467, elided ...] Jan 29 11:45:17.492416 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3059) Jan 29 11:45:17.535810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3059) Jan 29 11:45:17.565946 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3059) Jan 29 11:45:17.933046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599123358.mount: Deactivated successfully. Jan 29 11:45:18.301757 kubelet[2482]: E0129 11:45:18.301605 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:18.379714 containerd[1466]: time="2025-01-29T11:45:18.379672016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:18.380485 containerd[1466]: time="2025-01-29T11:45:18.380429120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:45:18.381761 containerd[1466]: time="2025-01-29T11:45:18.381701404Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:18.383580 containerd[1466]: time="2025-01-29T11:45:18.383542391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:18.384167 containerd[1466]: time="2025-01-29T11:45:18.384136166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.939160359s" Jan 29 11:45:18.384210 containerd[1466]: time="2025-01-29T11:45:18.384168469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:45:18.385146 containerd[1466]: time="2025-01-29T11:45:18.385124903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:45:18.386160 containerd[1466]: time="2025-01-29T11:45:18.386109253Z" level=info msg="CreateContainer within sandbox \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:45:18.403606 containerd[1466]: time="2025-01-29T11:45:18.403564233Z" level=info msg="CreateContainer within sandbox \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803\"" Jan 29 11:45:18.404206 containerd[1466]: time="2025-01-29T11:45:18.404076356Z" level=info msg="StartContainer for 
\"dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803\"" Jan 29 11:45:18.435063 systemd[1]: Started cri-containerd-dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803.scope - libcontainer container dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803. Jan 29 11:45:18.462505 containerd[1466]: time="2025-01-29T11:45:18.462459974Z" level=info msg="StartContainer for \"dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803\" returns successfully" Jan 29 11:45:18.475192 systemd[1]: cri-containerd-dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803.scope: Deactivated successfully. Jan 29 11:45:18.496478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803-rootfs.mount: Deactivated successfully. Jan 29 11:45:18.566433 containerd[1466]: time="2025-01-29T11:45:18.566262458Z" level=info msg="shim disconnected" id=dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803 namespace=k8s.io Jan 29 11:45:18.566433 containerd[1466]: time="2025-01-29T11:45:18.566327616Z" level=warning msg="cleaning up after shim disconnected" id=dd158f0112a12989d65e93420ed09007d52a20f81a2418346d03656b93eaf803 namespace=k8s.io Jan 29 11:45:18.566433 containerd[1466]: time="2025-01-29T11:45:18.566338086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:45:19.341089 kubelet[2482]: E0129 11:45:19.341008 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:20.302679 kubelet[2482]: E0129 11:45:20.302595 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:21.324222 containerd[1466]: time="2025-01-29T11:45:21.324147479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:21.324840 containerd[1466]: time="2025-01-29T11:45:21.324795602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:45:21.326151 containerd[1466]: time="2025-01-29T11:45:21.326125784Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:21.328313 containerd[1466]: time="2025-01-29T11:45:21.328282135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:21.328798 containerd[1466]: time="2025-01-29T11:45:21.328766188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.943615424s" Jan 29 11:45:21.328837 containerd[1466]: time="2025-01-29T11:45:21.328798932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns 
image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:45:21.330381 containerd[1466]: time="2025-01-29T11:45:21.330242715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:45:21.338629 containerd[1466]: time="2025-01-29T11:45:21.337613475Z" level=info msg="CreateContainer within sandbox \"75bde2bbc87d9a8a285a4932efe7058655e1269a648eaa69d68eca58439698d8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:45:21.351448 containerd[1466]: time="2025-01-29T11:45:21.351403884Z" level=info msg="CreateContainer within sandbox \"75bde2bbc87d9a8a285a4932efe7058655e1269a648eaa69d68eca58439698d8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5e5992b41494eca045185250bdd49282c0456b81be652bba2ce84e63796bacef\"" Jan 29 11:45:21.351860 containerd[1466]: time="2025-01-29T11:45:21.351821367Z" level=info msg="StartContainer for \"5e5992b41494eca045185250bdd49282c0456b81be652bba2ce84e63796bacef\"" Jan 29 11:45:21.381046 systemd[1]: Started cri-containerd-5e5992b41494eca045185250bdd49282c0456b81be652bba2ce84e63796bacef.scope - libcontainer container 5e5992b41494eca045185250bdd49282c0456b81be652bba2ce84e63796bacef. Jan 29 11:45:21.421510 containerd[1466]: time="2025-01-29T11:45:21.421477851Z" level=info msg="StartContainer for \"5e5992b41494eca045185250bdd49282c0456b81be652bba2ce84e63796bacef\" returns successfully" Jan 29 11:45:22.301650 kubelet[2482]: E0129 11:45:22.301577 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:22.349049 kubelet[2482]: E0129 11:45:22.349020 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:22.358852 kubelet[2482]: I0129 11:45:22.358712 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c94b8cb6d-pbfp9" podStartSLOduration=2.494360026 podStartE2EDuration="7.358695321s" podCreationTimestamp="2025-01-29 11:45:15 +0000 UTC" firstStartedPulling="2025-01-29 11:45:16.465349713 +0000 UTC m=+12.318164532" lastFinishedPulling="2025-01-29 11:45:21.329685008 +0000 UTC m=+17.182499827" observedRunningTime="2025-01-29 11:45:22.358019597 +0000 UTC m=+18.210834416" watchObservedRunningTime="2025-01-29 11:45:22.358695321 +0000 UTC m=+18.211510141" Jan 29 11:45:23.350095 kubelet[2482]: I0129 11:45:23.350040 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:45:23.350538 kubelet[2482]: E0129 11:45:23.350516 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:24.303168 kubelet[2482]: E0129 11:45:24.303109 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:26.301606 kubelet[2482]: E0129 11:45:26.301550 2482 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:27.755171 containerd[1466]: time="2025-01-29T11:45:27.755118259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:27.755789 containerd[1466]: time="2025-01-29T11:45:27.755754428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:45:27.756793 containerd[1466]: time="2025-01-29T11:45:27.756760660Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:27.758932 containerd[1466]: time="2025-01-29T11:45:27.758878921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:27.759488 containerd[1466]: time="2025-01-29T11:45:27.759458960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.42918731s" Jan 29 11:45:27.759520 containerd[1466]: time="2025-01-29T11:45:27.759488408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:45:27.761258 containerd[1466]: time="2025-01-29T11:45:27.761224770Z" level=info msg="CreateContainer within sandbox \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:45:27.777276 containerd[1466]: time="2025-01-29T11:45:27.777241817Z" level=info msg="CreateContainer within sandbox \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b\"" Jan 29 11:45:27.777686 containerd[1466]: time="2025-01-29T11:45:27.777651489Z" level=info msg="StartContainer for \"c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b\"" Jan 29 11:45:27.810039 systemd[1]: Started cri-containerd-c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b.scope - libcontainer container c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b. 
Jan 29 11:45:27.839676 containerd[1466]: time="2025-01-29T11:45:27.839631947Z" level=info msg="StartContainer for \"c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b\" returns successfully" Jan 29 11:45:28.302364 kubelet[2482]: E0129 11:45:28.302295 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:28.359724 kubelet[2482]: E0129 11:45:28.359686 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:28.711697 systemd[1]: cri-containerd-c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b.scope: Deactivated successfully. Jan 29 11:45:28.733228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b-rootfs.mount: Deactivated successfully. Jan 29 11:45:28.742234 kubelet[2482]: I0129 11:45:28.742192 2482 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:45:28.842341 containerd[1466]: time="2025-01-29T11:45:28.842238259Z" level=info msg="shim disconnected" id=c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b namespace=k8s.io Jan 29 11:45:28.842341 containerd[1466]: time="2025-01-29T11:45:28.842295940Z" level=warning msg="cleaning up after shim disconnected" id=c0e91cd8dec7f216d0353aaef121487c61a7684584505d9f08209f417ced152b namespace=k8s.io Jan 29 11:45:28.842341 containerd[1466]: time="2025-01-29T11:45:28.842304276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:45:28.847889 systemd[1]: Created slice kubepods-besteffort-pod341ae40d_b2cd_48be_89df_3aae61760d67.slice - libcontainer container kubepods-besteffort-pod341ae40d_b2cd_48be_89df_3aae61760d67.slice. Jan 29 11:45:28.857631 systemd[1]: Created slice kubepods-burstable-pod8232d851_127c_45d7_b458_e6bfcdd82418.slice - libcontainer container kubepods-burstable-pod8232d851_127c_45d7_b458_e6bfcdd82418.slice. Jan 29 11:45:28.863331 systemd[1]: Created slice kubepods-burstable-pod9e8e6d53_1dde_47b7_be75_dd444d38411e.slice - libcontainer container kubepods-burstable-pod9e8e6d53_1dde_47b7_be75_dd444d38411e.slice. Jan 29 11:45:28.869059 systemd[1]: Created slice kubepods-besteffort-pode97ff18c_9ca5_474c_b893_4e67487f341c.slice - libcontainer container kubepods-besteffort-pode97ff18c_9ca5_474c_b893_4e67487f341c.slice. Jan 29 11:45:28.875449 systemd[1]: Created slice kubepods-besteffort-pod3e8ec329_4c59_4738_98e7_f420cb51aefa.slice - libcontainer container kubepods-besteffort-pod3e8ec329_4c59_4738_98e7_f420cb51aefa.slice. 
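
The Created slice lines above show the kubelet's systemd cgroup driver carving out one transient .slice unit per pod, named from the pod's QoS class and UID. Because systemd reads "-" in a slice name as a hierarchy separator, the hyphens in the pod UID are escaped to underscores, which is why Uid:341ae40d-b2cd-48be-89df-3aae61760d67 in the sandbox metadata below corresponds to kubepods-besteffort-pod341ae40d_b2cd_48be_89df_3aae61760d67.slice above. A small sketch of the mapping (illustrative helper, not kubelet source):

package main

import (
    "fmt"
    "strings"
)

// podSliceName reproduces the naming visible in the systemd log entries:
// kubepods-<qos>-pod<uid with "-" escaped to "_">.slice
func podSliceName(qosClass, podUID string) string {
    // systemd uses "-" to express slice hierarchy, so the UID's
    // hyphens must be escaped inside the unit name.
    escaped := strings.ReplaceAll(podUID, "-", "_")
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
    fmt.Println(podSliceName("besteffort", "341ae40d-b2cd-48be-89df-3aae61760d67"))
    // Output: kubepods-besteffort-pod341ae40d_b2cd_48be_89df_3aae61760d67.slice
}
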
Jan 29 11:45:28.948301 kubelet[2482]: I0129 11:45:28.948237 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk57g\" (UniqueName: \"kubernetes.io/projected/3e8ec329-4c59-4738-98e7-f420cb51aefa-kube-api-access-sk57g\") pod \"calico-kube-controllers-749bdc5899-6mcr2\" (UID: \"3e8ec329-4c59-4738-98e7-f420cb51aefa\") " pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" Jan 29 11:45:28.948301 kubelet[2482]: I0129 11:45:28.948300 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e97ff18c-9ca5-474c-b893-4e67487f341c-calico-apiserver-certs\") pod \"calico-apiserver-69f5d4f59b-9dw6n\" (UID: \"e97ff18c-9ca5-474c-b893-4e67487f341c\") " pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" Jan 29 11:45:28.948439 kubelet[2482]: I0129 11:45:28.948360 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e8ec329-4c59-4738-98e7-f420cb51aefa-tigera-ca-bundle\") pod \"calico-kube-controllers-749bdc5899-6mcr2\" (UID: \"3e8ec329-4c59-4738-98e7-f420cb51aefa\") " pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" Jan 29 11:45:28.948439 kubelet[2482]: I0129 11:45:28.948412 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wjt\" (UniqueName: \"kubernetes.io/projected/9e8e6d53-1dde-47b7-be75-dd444d38411e-kube-api-access-j8wjt\") pod \"coredns-6f6b679f8f-8dvxp\" (UID: \"9e8e6d53-1dde-47b7-be75-dd444d38411e\") " pod="kube-system/coredns-6f6b679f8f-8dvxp" Jan 29 11:45:28.948439 kubelet[2482]: I0129 11:45:28.948429 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4fnz\" (UniqueName: \"kubernetes.io/projected/e97ff18c-9ca5-474c-b893-4e67487f341c-kube-api-access-p4fnz\") pod \"calico-apiserver-69f5d4f59b-9dw6n\" (UID: \"e97ff18c-9ca5-474c-b893-4e67487f341c\") " pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" Jan 29 11:45:28.948509 kubelet[2482]: I0129 11:45:28.948449 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e8e6d53-1dde-47b7-be75-dd444d38411e-config-volume\") pod \"coredns-6f6b679f8f-8dvxp\" (UID: \"9e8e6d53-1dde-47b7-be75-dd444d38411e\") " pod="kube-system/coredns-6f6b679f8f-8dvxp" Jan 29 11:45:28.948509 kubelet[2482]: I0129 11:45:28.948469 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8232d851-127c-45d7-b458-e6bfcdd82418-config-volume\") pod \"coredns-6f6b679f8f-27tjt\" (UID: \"8232d851-127c-45d7-b458-e6bfcdd82418\") " pod="kube-system/coredns-6f6b679f8f-27tjt" Jan 29 11:45:28.948509 kubelet[2482]: I0129 11:45:28.948489 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6vl\" (UniqueName: \"kubernetes.io/projected/8232d851-127c-45d7-b458-e6bfcdd82418-kube-api-access-4r6vl\") pod \"coredns-6f6b679f8f-27tjt\" (UID: \"8232d851-127c-45d7-b458-e6bfcdd82418\") " pod="kube-system/coredns-6f6b679f8f-27tjt" Jan 29 11:45:28.948578 kubelet[2482]: I0129 11:45:28.948505 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-46s6j\" (UniqueName: \"kubernetes.io/projected/341ae40d-b2cd-48be-89df-3aae61760d67-kube-api-access-46s6j\") pod \"calico-apiserver-69f5d4f59b-p5w8p\" (UID: \"341ae40d-b2cd-48be-89df-3aae61760d67\") " pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" Jan 29 11:45:28.948578 kubelet[2482]: I0129 11:45:28.948528 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/341ae40d-b2cd-48be-89df-3aae61760d67-calico-apiserver-certs\") pod \"calico-apiserver-69f5d4f59b-p5w8p\" (UID: \"341ae40d-b2cd-48be-89df-3aae61760d67\") " pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" Jan 29 11:45:29.154543 containerd[1466]: time="2025-01-29T11:45:29.154512495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-p5w8p,Uid:341ae40d-b2cd-48be-89df-3aae61760d67,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:45:29.165993 kubelet[2482]: E0129 11:45:29.165785 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:29.166338 containerd[1466]: time="2025-01-29T11:45:29.166314400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-27tjt,Uid:8232d851-127c-45d7-b458-e6bfcdd82418,Namespace:kube-system,Attempt:0,}" Jan 29 11:45:29.166791 kubelet[2482]: E0129 11:45:29.166486 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:29.167213 containerd[1466]: time="2025-01-29T11:45:29.167140722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8dvxp,Uid:9e8e6d53-1dde-47b7-be75-dd444d38411e,Namespace:kube-system,Attempt:0,}" Jan 29 11:45:29.173262 containerd[1466]: time="2025-01-29T11:45:29.173206778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-9dw6n,Uid:e97ff18c-9ca5-474c-b893-4e67487f341c,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:45:29.178525 containerd[1466]: time="2025-01-29T11:45:29.178491729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749bdc5899-6mcr2,Uid:3e8ec329-4c59-4738-98e7-f420cb51aefa,Namespace:calico-system,Attempt:0,}" Jan 29 11:45:29.269415 containerd[1466]: time="2025-01-29T11:45:29.269354210Z" level=error msg="Failed to destroy network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.270247 containerd[1466]: time="2025-01-29T11:45:29.270222052Z" level=error msg="encountered an error cleaning up failed sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.270295 containerd[1466]: time="2025-01-29T11:45:29.270273141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-p5w8p,Uid:341ae40d-b2cd-48be-89df-3aae61760d67,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.271863 kubelet[2482]: E0129 11:45:29.271805 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.272026 kubelet[2482]: E0129 11:45:29.271896 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" Jan 29 11:45:29.272255 containerd[1466]: time="2025-01-29T11:45:29.272174674Z" level=error msg="Failed to destroy network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.272953 containerd[1466]: time="2025-01-29T11:45:29.272859062Z" level=error msg="encountered an error cleaning up failed sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.272953 containerd[1466]: time="2025-01-29T11:45:29.272931582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8dvxp,Uid:9e8e6d53-1dde-47b7-be75-dd444d38411e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.273201 kubelet[2482]: E0129 11:45:29.273150 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.273201 kubelet[2482]: E0129 11:45:29.273196 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8dvxp" Jan 29 11:45:29.273892 
Jan 29 11:45:29.273892 kubelet[2482]: E0129 11:45:29.273869 2482 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8dvxp" Jan 29 11:45:29.274136 kubelet[2482]: E0129 11:45:29.274017 2482 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" Jan 29 11:45:29.276529 kubelet[2482]: E0129 11:45:29.276441 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69f5d4f59b-p5w8p_calico-apiserver(341ae40d-b2cd-48be-89df-3aae61760d67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69f5d4f59b-p5w8p_calico-apiserver(341ae40d-b2cd-48be-89df-3aae61760d67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" podUID="341ae40d-b2cd-48be-89df-3aae61760d67" Jan 29 11:45:29.276529 kubelet[2482]: E0129 11:45:29.276505 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8dvxp_kube-system(9e8e6d53-1dde-47b7-be75-dd444d38411e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8dvxp_kube-system(9e8e6d53-1dde-47b7-be75-dd444d38411e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8dvxp" podUID="9e8e6d53-1dde-47b7-be75-dd444d38411e" Jan 29 11:45:29.287362 containerd[1466]: time="2025-01-29T11:45:29.287032676Z" level=error msg="Failed to destroy network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.287721 containerd[1466]: time="2025-01-29T11:45:29.287670615Z" level=error msg="Failed to destroy network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288025 containerd[1466]: time="2025-01-29T11:45:29.287940515Z" level=error msg="encountered an error cleaning up failed sandbox 
\"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288025 containerd[1466]: time="2025-01-29T11:45:29.287986184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-9dw6n,Uid:e97ff18c-9ca5-474c-b893-4e67487f341c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288250 containerd[1466]: time="2025-01-29T11:45:29.288144589Z" level=error msg="encountered an error cleaning up failed sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288250 containerd[1466]: time="2025-01-29T11:45:29.288214854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749bdc5899-6mcr2,Uid:3e8ec329-4c59-4738-98e7-f420cb51aefa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288571 kubelet[2482]: E0129 11:45:29.288524 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288642 kubelet[2482]: E0129 11:45:29.288591 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" Jan 29 11:45:29.288642 kubelet[2482]: E0129 11:45:29.288610 2482 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" Jan 29 11:45:29.288714 kubelet[2482]: E0129 11:45:29.288682 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-69f5d4f59b-9dw6n_calico-apiserver(e97ff18c-9ca5-474c-b893-4e67487f341c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69f5d4f59b-9dw6n_calico-apiserver(e97ff18c-9ca5-474c-b893-4e67487f341c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" podUID="e97ff18c-9ca5-474c-b893-4e67487f341c" Jan 29 11:45:29.288821 kubelet[2482]: E0129 11:45:29.288777 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.288896 kubelet[2482]: E0129 11:45:29.288877 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" Jan 29 11:45:29.289469 kubelet[2482]: E0129 11:45:29.288902 2482 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" Jan 29 11:45:29.289544 kubelet[2482]: E0129 11:45:29.289502 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-749bdc5899-6mcr2_calico-system(3e8ec329-4c59-4738-98e7-f420cb51aefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-749bdc5899-6mcr2_calico-system(3e8ec329-4c59-4738-98e7-f420cb51aefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" podUID="3e8ec329-4c59-4738-98e7-f420cb51aefa" Jan 29 11:45:29.296781 containerd[1466]: time="2025-01-29T11:45:29.296734907Z" level=error msg="Failed to destroy network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.297091 containerd[1466]: time="2025-01-29T11:45:29.297060354Z" level=error msg="encountered an error cleaning up failed sandbox 
\"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.297127 containerd[1466]: time="2025-01-29T11:45:29.297101744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-27tjt,Uid:8232d851-127c-45d7-b458-e6bfcdd82418,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.297276 kubelet[2482]: E0129 11:45:29.297252 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.297315 kubelet[2482]: E0129 11:45:29.297292 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-27tjt" Jan 29 11:45:29.297358 kubelet[2482]: E0129 11:45:29.297316 2482 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-27tjt" Jan 29 11:45:29.297406 kubelet[2482]: E0129 11:45:29.297352 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-27tjt_kube-system(8232d851-127c-45d7-b458-e6bfcdd82418)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-27tjt_kube-system(8232d851-127c-45d7-b458-e6bfcdd82418)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-27tjt" podUID="8232d851-127c-45d7-b458-e6bfcdd82418" Jan 29 11:45:29.369023 kubelet[2482]: I0129 11:45:29.368813 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:45:29.369789 kubelet[2482]: I0129 11:45:29.369769 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:45:29.370794 containerd[1466]: 
time="2025-01-29T11:45:29.370414757Z" level=info msg="StopPodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\"" Jan 29 11:45:29.370794 containerd[1466]: time="2025-01-29T11:45:29.370450847Z" level=info msg="StopPodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\"" Jan 29 11:45:29.370794 containerd[1466]: time="2025-01-29T11:45:29.370568444Z" level=info msg="Ensure that sandbox 738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea in task-service has been cleanup successfully" Jan 29 11:45:29.370794 containerd[1466]: time="2025-01-29T11:45:29.370578092Z" level=info msg="Ensure that sandbox 4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9 in task-service has been cleanup successfully" Jan 29 11:45:29.371025 kubelet[2482]: I0129 11:45:29.370964 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:45:29.371997 containerd[1466]: time="2025-01-29T11:45:29.371968310Z" level=info msg="StopPodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\"" Jan 29 11:45:29.372130 containerd[1466]: time="2025-01-29T11:45:29.372110243Z" level=info msg="Ensure that sandbox 8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551 in task-service has been cleanup successfully" Jan 29 11:45:29.372946 kubelet[2482]: I0129 11:45:29.372571 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:45:29.373304 containerd[1466]: time="2025-01-29T11:45:29.373280398Z" level=info msg="StopPodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\"" Jan 29 11:45:29.373813 containerd[1466]: time="2025-01-29T11:45:29.373778808Z" level=info msg="Ensure that sandbox 90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db in task-service has been cleanup successfully" Jan 29 11:45:29.376598 kubelet[2482]: E0129 11:45:29.376480 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:29.380442 containerd[1466]: time="2025-01-29T11:45:29.378727621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:45:29.380442 containerd[1466]: time="2025-01-29T11:45:29.379240239Z" level=info msg="StopPodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\"" Jan 29 11:45:29.380442 containerd[1466]: time="2025-01-29T11:45:29.379415116Z" level=info msg="Ensure that sandbox 41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64 in task-service has been cleanup successfully" Jan 29 11:45:29.380618 kubelet[2482]: I0129 11:45:29.378813 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:45:29.425817 containerd[1466]: time="2025-01-29T11:45:29.424417891Z" level=error msg="StopPodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" failed" error="failed to destroy network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.426135 
containerd[1466]: time="2025-01-29T11:45:29.424874481Z" level=error msg="StopPodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" failed" error="failed to destroy network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.426351 kubelet[2482]: E0129 11:45:29.426309 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:45:29.426445 kubelet[2482]: E0129 11:45:29.426388 2482 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9"} Jan 29 11:45:29.426471 kubelet[2482]: E0129 11:45:29.426452 2482 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e97ff18c-9ca5-474c-b893-4e67487f341c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:45:29.426525 kubelet[2482]: E0129 11:45:29.426477 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e97ff18c-9ca5-474c-b893-4e67487f341c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" podUID="e97ff18c-9ca5-474c-b893-4e67487f341c" Jan 29 11:45:29.426525 kubelet[2482]: E0129 11:45:29.426509 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:45:29.426525 kubelet[2482]: E0129 11:45:29.426522 2482 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db"} Jan 29 11:45:29.426616 kubelet[2482]: E0129 11:45:29.426539 2482 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8232d851-127c-45d7-b458-e6bfcdd82418\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:45:29.426616 kubelet[2482]: E0129 11:45:29.426556 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8232d851-127c-45d7-b458-e6bfcdd82418\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-27tjt" podUID="8232d851-127c-45d7-b458-e6bfcdd82418" Jan 29 11:45:29.428424 containerd[1466]: time="2025-01-29T11:45:29.428377980Z" level=error msg="StopPodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" failed" error="failed to destroy network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.428544 kubelet[2482]: E0129 11:45:29.428514 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:45:29.428544 kubelet[2482]: E0129 11:45:29.428569 2482 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea"} Jan 29 11:45:29.428635 kubelet[2482]: E0129 11:45:29.428592 2482 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"341ae40d-b2cd-48be-89df-3aae61760d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:45:29.428687 kubelet[2482]: E0129 11:45:29.428612 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"341ae40d-b2cd-48be-89df-3aae61760d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" podUID="341ae40d-b2cd-48be-89df-3aae61760d67" Jan 29 11:45:29.433899 containerd[1466]: time="2025-01-29T11:45:29.433440222Z" level=error msg="StopPodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" failed" 
error="failed to destroy network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.433983 kubelet[2482]: E0129 11:45:29.433569 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:45:29.433983 kubelet[2482]: E0129 11:45:29.433597 2482 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551"} Jan 29 11:45:29.433983 kubelet[2482]: E0129 11:45:29.433620 2482 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e8e6d53-1dde-47b7-be75-dd444d38411e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:45:29.433983 kubelet[2482]: E0129 11:45:29.433640 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e8e6d53-1dde-47b7-be75-dd444d38411e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8dvxp" podUID="9e8e6d53-1dde-47b7-be75-dd444d38411e" Jan 29 11:45:29.440769 containerd[1466]: time="2025-01-29T11:45:29.440729996Z" level=error msg="StopPodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" failed" error="failed to destroy network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:29.440894 kubelet[2482]: E0129 11:45:29.440863 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:45:29.440968 kubelet[2482]: E0129 11:45:29.440897 2482 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64"} Jan 29 11:45:29.440968 kubelet[2482]: E0129 11:45:29.440935 
2482 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e8ec329-4c59-4738-98e7-f420cb51aefa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:45:29.440968 kubelet[2482]: E0129 11:45:29.440953 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e8ec329-4c59-4738-98e7-f420cb51aefa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" podUID="3e8ec329-4c59-4738-98e7-f420cb51aefa" Jan 29 11:45:30.059525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea-shm.mount: Deactivated successfully. Jan 29 11:45:30.310996 systemd[1]: Created slice kubepods-besteffort-podf43cd9c6_970c_4688_9f00_2800e91cf652.slice - libcontainer container kubepods-besteffort-podf43cd9c6_970c_4688_9f00_2800e91cf652.slice. Jan 29 11:45:30.314775 containerd[1466]: time="2025-01-29T11:45:30.314742454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sllk6,Uid:f43cd9c6-970c-4688-9f00-2800e91cf652,Namespace:calico-system,Attempt:0,}" Jan 29 11:45:30.374963 containerd[1466]: time="2025-01-29T11:45:30.374898388Z" level=error msg="Failed to destroy network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:30.375273 containerd[1466]: time="2025-01-29T11:45:30.375248331Z" level=error msg="encountered an error cleaning up failed sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:30.375321 containerd[1466]: time="2025-01-29T11:45:30.375300502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sllk6,Uid:f43cd9c6-970c-4688-9f00-2800e91cf652,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:30.375630 kubelet[2482]: E0129 11:45:30.375567 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:30.376011 kubelet[2482]: E0129 11:45:30.375642 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:30.376011 kubelet[2482]: E0129 11:45:30.375662 2482 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sllk6" Jan 29 11:45:30.376011 kubelet[2482]: E0129 11:45:30.375711 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sllk6_calico-system(f43cd9c6-970c-4688-9f00-2800e91cf652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sllk6_calico-system(f43cd9c6-970c-4688-9f00-2800e91cf652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:30.377517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e-shm.mount: Deactivated successfully. 
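
Everything in the window above fails for a single reason: the Calico CNI plugin cannot stat /var/lib/calico/nodename. That file is written by the calico/node container when it starts, and at this point calico/node is not running yet (the PullImage for ghcr.io/flatcar/calico/node:v3.29.1 issued at 11:45:29 is still in flight). Until the file exists, every sandbox ADD and DEL is rejected up front, and kubelet keeps retrying with "Error syncing pod, skipping". A minimal Go sketch of that kind of guard, inferred only from the error text in the log and not taken from Calico's source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename" // written by calico/node on startup

    // nodename mimics the gate visible in the errors above: without the file,
    // every CNI ADD/DEL is refused before any network work is attempted.
    func nodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // Shape of the failure repeated throughout the log above.
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("CNI operating as node:", name)
    }

The recovery is visible further down: the image pull completes at 11:45:36, calico-node starts at 11:45:37, and from 11:45:41 the same StopPodSandbox and RunPodSandbox calls begin to succeed.
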
Jan 29 11:45:30.381634 kubelet[2482]: I0129 11:45:30.381600 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:45:30.382223 containerd[1466]: time="2025-01-29T11:45:30.382200169Z" level=info msg="StopPodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\"" Jan 29 11:45:30.382386 containerd[1466]: time="2025-01-29T11:45:30.382338225Z" level=info msg="Ensure that sandbox e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e in task-service has been cleanup successfully" Jan 29 11:45:30.411754 containerd[1466]: time="2025-01-29T11:45:30.411622811Z" level=error msg="StopPodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" failed" error="failed to destroy network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:45:30.411974 kubelet[2482]: E0129 11:45:30.411887 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:45:30.411974 kubelet[2482]: E0129 11:45:30.411950 2482 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e"} Jan 29 11:45:30.412071 kubelet[2482]: E0129 11:45:30.411981 2482 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f43cd9c6-970c-4688-9f00-2800e91cf652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:45:30.412071 kubelet[2482]: E0129 11:45:30.412003 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f43cd9c6-970c-4688-9f00-2800e91cf652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sllk6" podUID="f43cd9c6-970c-4688-9f00-2800e91cf652" Jan 29 11:45:32.810557 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:59106.service - OpenSSH per-connection server daemon (10.0.0.1:59106). Jan 29 11:45:32.866671 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 59106 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:32.868682 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:32.873346 systemd-logind[1452]: New session 8 of user core. 
Jan 29 11:45:32.884048 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:45:33.015206 sshd[3624]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:33.019985 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:59106.service: Deactivated successfully. Jan 29 11:45:33.021976 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:45:33.022646 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:45:33.023688 systemd-logind[1452]: Removed session 8. Jan 29 11:45:36.358276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965716878.mount: Deactivated successfully. Jan 29 11:45:36.641261 containerd[1466]: time="2025-01-29T11:45:36.641200472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:36.642076 containerd[1466]: time="2025-01-29T11:45:36.642016445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:45:36.645586 containerd[1466]: time="2025-01-29T11:45:36.645529311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.266763425s" Jan 29 11:45:36.645586 containerd[1466]: time="2025-01-29T11:45:36.645572053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:45:36.654296 containerd[1466]: time="2025-01-29T11:45:36.654248355Z" level=info msg="CreateContainer within sandbox \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:45:36.676148 containerd[1466]: time="2025-01-29T11:45:36.676085355Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:36.676907 containerd[1466]: time="2025-01-29T11:45:36.676867002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:36.678288 containerd[1466]: time="2025-01-29T11:45:36.678227657Z" level=info msg="CreateContainer within sandbox \"f6d1f9716cfcd4261c04d36d08ce0579666f3d9ffbc22bb8060a585b5a4070c8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f\"" Jan 29 11:45:36.678800 containerd[1466]: time="2025-01-29T11:45:36.678769715Z" level=info msg="StartContainer for \"60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f\"" Jan 29 11:45:36.754140 systemd[1]: Started cri-containerd-60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f.scope - libcontainer container 60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f. Jan 29 11:45:37.022255 containerd[1466]: time="2025-01-29T11:45:37.022095012Z" level=info msg="StartContainer for \"60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f\" returns successfully" Jan 29 11:45:37.051529 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 29 11:45:37.051668 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 11:45:37.397539 kubelet[2482]: E0129 11:45:37.397475 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:37.428169 systemd[1]: run-containerd-runc-k8s.io-60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f-runc.v9TwfY.mount: Deactivated successfully. Jan 29 11:45:37.467482 kubelet[2482]: I0129 11:45:37.467424 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-25jw6" podStartSLOduration=2.265649627 podStartE2EDuration="22.467405219s" podCreationTimestamp="2025-01-29 11:45:15 +0000 UTC" firstStartedPulling="2025-01-29 11:45:16.444475001 +0000 UTC m=+12.297289810" lastFinishedPulling="2025-01-29 11:45:36.646230583 +0000 UTC m=+32.499045402" observedRunningTime="2025-01-29 11:45:37.467008971 +0000 UTC m=+33.319823790" watchObservedRunningTime="2025-01-29 11:45:37.467405219 +0000 UTC m=+33.320220028" Jan 29 11:45:38.026548 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:59114.service - OpenSSH per-connection server daemon (10.0.0.1:59114). Jan 29 11:45:38.079488 sshd[3734]: Accepted publickey for core from 10.0.0.1 port 59114 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:38.080911 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:38.084619 systemd-logind[1452]: New session 9 of user core. Jan 29 11:45:38.099055 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:45:38.219500 sshd[3734]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:38.222671 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:59114.service: Deactivated successfully. Jan 29 11:45:38.224777 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:45:38.226261 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:45:38.228199 systemd-logind[1452]: Removed session 9. Jan 29 11:45:38.400532 kubelet[2482]: E0129 11:45:38.399636 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:38.432291 systemd[1]: run-containerd-runc-k8s.io-60706712b7076901d0b3c5edd0d46327889327c8b5dc0e229ccf15317f931d7f-runc.aC4vgO.mount: Deactivated successfully. Jan 29 11:45:41.302885 containerd[1466]: time="2025-01-29T11:45:41.302810164Z" level=info msg="StopPodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\"" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.357 [INFO][3938] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.357 [INFO][3938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" iface="eth0" netns="/var/run/netns/cni-650bb946-cecd-0c33-2a61-8602d2f8ec2c" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.357 [INFO][3938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth.
ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" iface="eth0" netns="/var/run/netns/cni-650bb946-cecd-0c33-2a61-8602d2f8ec2c" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.358 [INFO][3938] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" iface="eth0" netns="/var/run/netns/cni-650bb946-cecd-0c33-2a61-8602d2f8ec2c" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.358 [INFO][3938] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.358 [INFO][3938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.407 [INFO][3946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.408 [INFO][3946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.408 [INFO][3946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.414 [WARNING][3946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.414 [INFO][3946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.416 [INFO][3946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:45:41.422004 containerd[1466]: 2025-01-29 11:45:41.418 [INFO][3938] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:45:41.422624 containerd[1466]: time="2025-01-29T11:45:41.422138860Z" level=info msg="TearDown network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" successfully" Jan 29 11:45:41.422624 containerd[1466]: time="2025-01-29T11:45:41.422165841Z" level=info msg="StopPodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" returns successfully" Jan 29 11:45:41.423438 kubelet[2482]: E0129 11:45:41.423250 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:41.424698 containerd[1466]: time="2025-01-29T11:45:41.423943916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8dvxp,Uid:9e8e6d53-1dde-47b7-be75-dd444d38411e,Namespace:kube-system,Attempt:1,}" Jan 29 11:45:41.425274 systemd[1]: run-netns-cni\x2d650bb946\x2dcecd\x2d0c33\x2d2a61\x2d8602d2f8ec2c.mount: Deactivated successfully. Jan 29 11:45:41.648425 systemd-networkd[1401]: calibade9da93ac: Link UP Jan 29 11:45:41.649339 systemd-networkd[1401]: calibade9da93ac: Gained carrier Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.549 [INFO][3955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.558 [INFO][3955] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0 coredns-6f6b679f8f- kube-system 9e8e6d53-1dde-47b7-be75-dd444d38411e 816 0 2025-01-29 11:45:09 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-8dvxp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibade9da93ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.559 [INFO][3955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.588 [INFO][3968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" HandleID="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.602 [INFO][3968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" HandleID="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdd10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-8dvxp",
"timestamp":"2025-01-29 11:45:41.588858796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.602 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.602 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.602 [INFO][3968] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.605 [INFO][3968] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.612 [INFO][3968] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.616 [INFO][3968] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.619 [INFO][3968] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.621 [INFO][3968] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.621 [INFO][3968] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.622 [INFO][3968] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667 Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.627 [INFO][3968] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.634 [INFO][3968] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.634 [INFO][3968] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" host="localhost" Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.634 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:45:41.668436 containerd[1466]: 2025-01-29 11:45:41.634 [INFO][3968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" HandleID="k8s-pod-network.9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.669363 containerd[1466]: 2025-01-29 11:45:41.640 [INFO][3955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9e8e6d53-1dde-47b7-be75-dd444d38411e", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-8dvxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibade9da93ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:41.669363 containerd[1466]: 2025-01-29 11:45:41.640 [INFO][3955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.669363 containerd[1466]: 2025-01-29 11:45:41.640 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibade9da93ac ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.669363 containerd[1466]: 2025-01-29 11:45:41.649 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.669363 containerd[1466]: 2025-01-29 11:45:41.650
[INFO][3955] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9e8e6d53-1dde-47b7-be75-dd444d38411e", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667", Pod:"coredns-6f6b679f8f-8dvxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibade9da93ac", MAC:"ee:59:ab:46:47:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:41.669363 containerd[1466]: 2025-01-29 11:45:41.661 [INFO][3955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667" Namespace="kube-system" Pod="coredns-6f6b679f8f-8dvxp" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:45:41.705492 containerd[1466]: time="2025-01-29T11:45:41.704653495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:41.705492 containerd[1466]: time="2025-01-29T11:45:41.705454425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:41.705717 containerd[1466]: time="2025-01-29T11:45:41.705473501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:41.705717 containerd[1466]: time="2025-01-29T11:45:41.705586276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:41.731196 systemd[1]: Started cri-containerd-9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667.scope - libcontainer container 9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667.
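
The IPAM trace above records Calico's block-affinity allocation: the host "localhost" already holds affinity for the /26 block 192.168.88.128/26, the block is loaded, and the first address claimed out of it is 192.168.88.129, which then appears in the written endpoint as IPNetworks ["192.168.88.129/32"]. A small standard-library sketch of the arithmetic involved (illustrative only, not Calico code):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Values taken from the IPAM trace above.
        block := netip.MustParsePrefix("192.168.88.128/26")
        assigned := netip.MustParseAddr("192.168.88.129")

        fmt.Println(block.Contains(assigned)) // true: the address lies inside the host's block
        fmt.Println(1 << (32 - block.Bits())) // 64: a /26 affinity block spans 64 addresses
    }
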
Jan 29 11:45:41.744752 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:45:41.771353 containerd[1466]: time="2025-01-29T11:45:41.771293470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8dvxp,Uid:9e8e6d53-1dde-47b7-be75-dd444d38411e,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667\"" Jan 29 11:45:41.772218 kubelet[2482]: E0129 11:45:41.772193 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:41.774056 containerd[1466]: time="2025-01-29T11:45:41.774024513Z" level=info msg="CreateContainer within sandbox \"9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:45:41.808136 containerd[1466]: time="2025-01-29T11:45:41.808094823Z" level=info msg="CreateContainer within sandbox \"9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1ced484abd06e7d13a05bb63a6496cf1c0f11e7b3e380f92acf3bd2389f7fbe\"" Jan 29 11:45:41.808695 containerd[1466]: time="2025-01-29T11:45:41.808662306Z" level=info msg="StartContainer for \"d1ced484abd06e7d13a05bb63a6496cf1c0f11e7b3e380f92acf3bd2389f7fbe\"" Jan 29 11:45:41.840106 systemd[1]: Started cri-containerd-d1ced484abd06e7d13a05bb63a6496cf1c0f11e7b3e380f92acf3bd2389f7fbe.scope - libcontainer container d1ced484abd06e7d13a05bb63a6496cf1c0f11e7b3e380f92acf3bd2389f7fbe. Jan 29 11:45:41.870569 containerd[1466]: time="2025-01-29T11:45:41.870508840Z" level=info msg="StartContainer for \"d1ced484abd06e7d13a05bb63a6496cf1c0f11e7b3e380f92acf3bd2389f7fbe\" returns successfully" Jan 29 11:45:42.430880 kubelet[2482]: E0129 11:45:42.430660 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:42.440256 kubelet[2482]: I0129 11:45:42.440059 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8dvxp" podStartSLOduration=33.440012844 podStartE2EDuration="33.440012844s" podCreationTimestamp="2025-01-29 11:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:45:42.439342886 +0000 UTC m=+38.292157725" watchObservedRunningTime="2025-01-29 11:45:42.440012844 +0000 UTC m=+38.292827663" Jan 29 11:45:42.880240 kubelet[2482]: I0129 11:45:42.880178 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:45:42.880778 kubelet[2482]: E0129 11:45:42.880739 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:43.235481 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974). 
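
The recurring "Nameserver limits exceeded" errors come from kubelet's resolv.conf handling: it applies at most three nameservers (the traditional glibc resolver limit) and logs the line it actually applied, here 1.1.1.1 1.0.0.1 8.8.8.8. A hedged sketch of that cap follows; the fourth server in the example is a hypothetical stand-in for whatever extra entry this node's resolv.conf carries, since the log only shows the three survivors:

    package main

    import "fmt"

    // maxNameservers mirrors the classic glibc MAXNS limit that kubelet enforces.
    const maxNameservers = 3

    // capNameservers returns the nameservers kubelet would apply and whether any were omitted.
    func capNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // "9.9.9.9" is made up; the log only proves that more than three entries existed.
        applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println(applied, "omitted:", omitted)
    }
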
Jan 29 11:45:43.278347 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:43.280128 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:43.284307 systemd-logind[1452]: New session 10 of user core. Jan 29 11:45:43.290050 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:45:43.302701 containerd[1466]: time="2025-01-29T11:45:43.302403378Z" level=info msg="StopPodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\"" Jan 29 11:45:43.302701 containerd[1466]: time="2025-01-29T11:45:43.302434136Z" level=info msg="StopPodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\"" Jan 29 11:45:43.432876 kubelet[2482]: E0129 11:45:43.432814 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:43.433441 kubelet[2482]: E0129 11:45:43.433062 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:43.511154 sshd[4118]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:43.516126 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:45:43.517435 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:45974.service: Deactivated successfully. Jan 29 11:45:43.519815 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:45:43.520999 systemd-logind[1452]: Removed session 10. Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.505 [INFO][4154] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.506 [INFO][4154] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" iface="eth0" netns="/var/run/netns/cni-ce22dc4a-68c0-966f-8b68-61e4eb64f745" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.506 [INFO][4154] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" iface="eth0" netns="/var/run/netns/cni-ce22dc4a-68c0-966f-8b68-61e4eb64f745" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.506 [INFO][4154] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" iface="eth0" netns="/var/run/netns/cni-ce22dc4a-68c0-966f-8b68-61e4eb64f745" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.506 [INFO][4154] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.506 [INFO][4154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.528 [INFO][4179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.529 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.529 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.568 [WARNING][4179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.568 [INFO][4179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.585 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:45:43.590469 containerd[1466]: 2025-01-29 11:45:43.588 [INFO][4154] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:45:43.591967 containerd[1466]: time="2025-01-29T11:45:43.591809075Z" level=info msg="TearDown network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" successfully" Jan 29 11:45:43.591967 containerd[1466]: time="2025-01-29T11:45:43.591838020Z" level=info msg="StopPodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" returns successfully" Jan 29 11:45:43.592574 containerd[1466]: time="2025-01-29T11:45:43.592553755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-p5w8p,Uid:341ae40d-b2cd-48be-89df-3aae61760d67,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:45:43.593569 systemd[1]: run-netns-cni\x2dce22dc4a\x2d68c0\x2d966f\x2d8b68\x2d61e4eb64f745.mount: Deactivated successfully. 
Jan 29 11:45:43.668092 systemd-networkd[1401]: calibade9da93ac: Gained IPv6LL Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.570 [INFO][4153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.570 [INFO][4153] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" iface="eth0" netns="/var/run/netns/cni-015f4106-847c-9ec1-296b-fb5de1d75d20" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.570 [INFO][4153] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" iface="eth0" netns="/var/run/netns/cni-015f4106-847c-9ec1-296b-fb5de1d75d20" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.571 [INFO][4153] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" iface="eth0" netns="/var/run/netns/cni-015f4106-847c-9ec1-296b-fb5de1d75d20" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.571 [INFO][4153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.571 [INFO][4153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.592 [INFO][4191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.592 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.592 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.727 [WARNING][4191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.727 [INFO][4191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.750 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:45:43.755771 containerd[1466]: 2025-01-29 11:45:43.752 [INFO][4153] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:45:43.756830 containerd[1466]: time="2025-01-29T11:45:43.756266233Z" level=info msg="TearDown network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" successfully" Jan 29 11:45:43.756830 containerd[1466]: time="2025-01-29T11:45:43.756292583Z" level=info msg="StopPodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" returns successfully" Jan 29 11:45:43.759082 containerd[1466]: time="2025-01-29T11:45:43.758852296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sllk6,Uid:f43cd9c6-970c-4688-9f00-2800e91cf652,Namespace:calico-system,Attempt:1,}" Jan 29 11:45:43.760775 systemd[1]: run-netns-cni\x2d015f4106\x2d847c\x2d9ec1\x2d296b\x2dfb5de1d75d20.mount: Deactivated successfully. Jan 29 11:45:43.954950 kernel: bpftool[4255]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:45:44.225195 systemd-networkd[1401]: vxlan.calico: Link UP Jan 29 11:45:44.225205 systemd-networkd[1401]: vxlan.calico: Gained carrier Jan 29 11:45:44.305217 containerd[1466]: time="2025-01-29T11:45:44.305170771Z" level=info msg="StopPodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\"" Jan 29 11:45:44.306452 containerd[1466]: time="2025-01-29T11:45:44.305743373Z" level=info msg="StopPodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\"" Jan 29 11:45:44.404584 systemd-networkd[1401]: cali54601cd9cbd: Link UP Jan 29 11:45:44.409857 systemd-networkd[1401]: cali54601cd9cbd: Gained carrier Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.139 [INFO][4257] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sllk6-eth0 csi-node-driver- calico-system f43cd9c6-970c-4688-9f00-2800e91cf652 857 0 2025-01-29 11:45:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sllk6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali54601cd9cbd [] []}} ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.139 [INFO][4257] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.182 [INFO][4283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" HandleID="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.196 [INFO][4283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" 
HandleID="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003916a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sllk6", "timestamp":"2025-01-29 11:45:44.182511578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.196 [INFO][4283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.196 [INFO][4283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.196 [INFO][4283] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.201 [INFO][4283] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.293 [INFO][4283] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.298 [INFO][4283] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.301 [INFO][4283] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.303 [INFO][4283] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.304 [INFO][4283] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.307 [INFO][4283] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01 Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.373 [INFO][4283] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.382 [INFO][4283] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.382 [INFO][4283] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" host="localhost" Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.382 [INFO][4283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:45:44.431950 containerd[1466]: 2025-01-29 11:45:44.382 [INFO][4283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" HandleID="k8s-pod-network.a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.433146 containerd[1466]: 2025-01-29 11:45:44.393 [INFO][4257] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sllk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f43cd9c6-970c-4688-9f00-2800e91cf652", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sllk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54601cd9cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.433146 containerd[1466]: 2025-01-29 11:45:44.393 [INFO][4257] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.433146 containerd[1466]: 2025-01-29 11:45:44.393 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54601cd9cbd ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.433146 containerd[1466]: 2025-01-29 11:45:44.407 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.433146 containerd[1466]: 2025-01-29 11:45:44.413 [INFO][4257] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sllk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f43cd9c6-970c-4688-9f00-2800e91cf652", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01", Pod:"csi-node-driver-sllk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54601cd9cbd", MAC:"42:13:ce:13:d0:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.433146 containerd[1466]: 2025-01-29 11:45:44.426 [INFO][4257] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01" Namespace="calico-system" Pod="csi-node-driver-sllk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:45:44.435744 kubelet[2482]: E0129 11:45:44.435713 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:44.496312 containerd[1466]: time="2025-01-29T11:45:44.495359959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:44.496312 containerd[1466]: time="2025-01-29T11:45:44.495465861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:44.496312 containerd[1466]: time="2025-01-29T11:45:44.495583966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.496312 containerd[1466]: time="2025-01-29T11:45:44.495770141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.517154 systemd[1]: Started cri-containerd-a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01.scope - libcontainer container a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01. 
Jan 29 11:45:44.530650 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:45:44.545794 containerd[1466]: time="2025-01-29T11:45:44.545748096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sllk6,Uid:f43cd9c6-970c-4688-9f00-2800e91cf652,Namespace:calico-system,Attempt:1,} returns sandbox id \"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01\"" Jan 29 11:45:44.547392 containerd[1466]: time="2025-01-29T11:45:44.547354860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:45:44.562532 systemd-networkd[1401]: cali60a9e6a1834: Link UP Jan 29 11:45:44.562992 systemd-networkd[1401]: cali60a9e6a1834: Gained carrier Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.388 [INFO][4372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.388 [INFO][4372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" iface="eth0" netns="/var/run/netns/cni-5e2b1d4a-860a-6710-40a2-da3153a3aebe" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.389 [INFO][4372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" iface="eth0" netns="/var/run/netns/cni-5e2b1d4a-860a-6710-40a2-da3153a3aebe" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.389 [INFO][4372] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" iface="eth0" netns="/var/run/netns/cni-5e2b1d4a-860a-6710-40a2-da3153a3aebe" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.389 [INFO][4372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.389 [INFO][4372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.432 [INFO][4388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.432 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.553 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.562 [WARNING][4388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.562 [INFO][4388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.564 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:45:44.579120 containerd[1466]: 2025-01-29 11:45:44.572 [INFO][4372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:45:44.581127 containerd[1466]: time="2025-01-29T11:45:44.580973980Z" level=info msg="TearDown network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" successfully" Jan 29 11:45:44.581127 containerd[1466]: time="2025-01-29T11:45:44.581013074Z" level=info msg="StopPodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" returns successfully" Jan 29 11:45:44.582789 containerd[1466]: time="2025-01-29T11:45:44.582740838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-9dw6n,Uid:e97ff18c-9ca5-474c-b893-4e67487f341c,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.150 [INFO][4266] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0 calico-apiserver-69f5d4f59b- calico-apiserver 341ae40d-b2cd-48be-89df-3aae61760d67 856 0 2025-01-29 11:45:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69f5d4f59b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69f5d4f59b-p5w8p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali60a9e6a1834 [] []}} ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.151 [INFO][4266] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.194 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" HandleID="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.292 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" HandleID="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69f5d4f59b-p5w8p", "timestamp":"2025-01-29 11:45:44.19415215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.293 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.382 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.382 [INFO][4289] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.386 [INFO][4289] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.396 [INFO][4289] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.407 [INFO][4289] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.410 [INFO][4289] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.419 [INFO][4289] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.419 [INFO][4289] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.431 [INFO][4289] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602 Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.466 [INFO][4289] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.553 [INFO][4289] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.553 [INFO][4289] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" host="localhost" Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.553 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:45:44.583864 containerd[1466]: 2025-01-29 11:45:44.553 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" HandleID="k8s-pod-network.cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.584693 containerd[1466]: 2025-01-29 11:45:44.558 [INFO][4266] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341ae40d-b2cd-48be-89df-3aae61760d67", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69f5d4f59b-p5w8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60a9e6a1834", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.584693 containerd[1466]: 2025-01-29 11:45:44.558 [INFO][4266] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.584693 containerd[1466]: 2025-01-29 11:45:44.558 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60a9e6a1834 ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.584693 containerd[1466]: 2025-01-29 11:45:44.563 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.584693 containerd[1466]: 2025-01-29 11:45:44.563 [INFO][4266] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341ae40d-b2cd-48be-89df-3aae61760d67", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602", Pod:"calico-apiserver-69f5d4f59b-p5w8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60a9e6a1834", MAC:"8a:d1:60:05:0f:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.584693 containerd[1466]: 2025-01-29 11:45:44.579 [INFO][4266] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-p5w8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.396 [INFO][4371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.396 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" iface="eth0" netns="/var/run/netns/cni-a8658ace-f189-ccc1-738c-99ecdd6706eb" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.396 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" iface="eth0" netns="/var/run/netns/cni-a8658ace-f189-ccc1-738c-99ecdd6706eb" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.397 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" iface="eth0" netns="/var/run/netns/cni-a8658ace-f189-ccc1-738c-99ecdd6706eb" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.397 [INFO][4371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.397 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.453 [INFO][4394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.453 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.564 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.571 [WARNING][4394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.571 [INFO][4394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.577 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:45:44.586018 containerd[1466]: 2025-01-29 11:45:44.582 [INFO][4371] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:45:44.586018 containerd[1466]: time="2025-01-29T11:45:44.585820859Z" level=info msg="TearDown network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" successfully" Jan 29 11:45:44.586018 containerd[1466]: time="2025-01-29T11:45:44.585842581Z" level=info msg="StopPodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" returns successfully" Jan 29 11:45:44.587858 containerd[1466]: time="2025-01-29T11:45:44.587316300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749bdc5899-6mcr2,Uid:3e8ec329-4c59-4738-98e7-f420cb51aefa,Namespace:calico-system,Attempt:1,}" Jan 29 11:45:44.595488 systemd[1]: run-netns-cni\x2da8658ace\x2df189\x2dccc1\x2d738c\x2d99ecdd6706eb.mount: Deactivated successfully. Jan 29 11:45:44.595646 systemd[1]: run-netns-cni\x2d5e2b1d4a\x2d860a\x2d6710\x2d40a2\x2dda3153a3aebe.mount: Deactivated successfully. Jan 29 11:45:44.634492 containerd[1466]: time="2025-01-29T11:45:44.633852505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:44.634492 containerd[1466]: time="2025-01-29T11:45:44.633948879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:44.634492 containerd[1466]: time="2025-01-29T11:45:44.633967975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.636341 containerd[1466]: time="2025-01-29T11:45:44.636095741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.664258 systemd[1]: Started cri-containerd-cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602.scope - libcontainer container cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602. Jan 29 11:45:44.684578 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:45:44.726906 containerd[1466]: time="2025-01-29T11:45:44.726854019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-p5w8p,Uid:341ae40d-b2cd-48be-89df-3aae61760d67,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602\"" Jan 29 11:45:44.750309 systemd-networkd[1401]: calid479894893e: Link UP Jan 29 11:45:44.751254 systemd-networkd[1401]: calid479894893e: Gained carrier Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.668 [INFO][4511] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0 calico-apiserver-69f5d4f59b- calico-apiserver e97ff18c-9ca5-474c-b893-4e67487f341c 871 0 2025-01-29 11:45:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69f5d4f59b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69f5d4f59b-9dw6n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid479894893e [] []}} ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.668 [INFO][4511] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.708 [INFO][4569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" HandleID="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.718 [INFO][4569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" 
HandleID="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b3750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69f5d4f59b-9dw6n", "timestamp":"2025-01-29 11:45:44.708225566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.718 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.718 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.718 [INFO][4569] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.720 [INFO][4569] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.724 [INFO][4569] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.730 [INFO][4569] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.732 [INFO][4569] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.735 [INFO][4569] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.735 [INFO][4569] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.736 [INFO][4569] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.739 [INFO][4569] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.744 [INFO][4569] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.744 [INFO][4569] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" host="localhost" Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.744 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:45:44.765185 containerd[1466]: 2025-01-29 11:45:44.744 [INFO][4569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" HandleID="k8s-pod-network.212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.766014 containerd[1466]: 2025-01-29 11:45:44.747 [INFO][4511] cni-plugin/k8s.go 386: Populated endpoint ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e97ff18c-9ca5-474c-b893-4e67487f341c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69f5d4f59b-9dw6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid479894893e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.766014 containerd[1466]: 2025-01-29 11:45:44.747 [INFO][4511] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.766014 containerd[1466]: 2025-01-29 11:45:44.747 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid479894893e ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.766014 containerd[1466]: 2025-01-29 11:45:44.750 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.766014 containerd[1466]: 2025-01-29 11:45:44.750 [INFO][4511] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e97ff18c-9ca5-474c-b893-4e67487f341c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d", Pod:"calico-apiserver-69f5d4f59b-9dw6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid479894893e", MAC:"16:ea:23:4b:db:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.766014 containerd[1466]: 2025-01-29 11:45:44.761 [INFO][4511] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d" Namespace="calico-apiserver" Pod="calico-apiserver-69f5d4f59b-9dw6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:45:44.787493 containerd[1466]: time="2025-01-29T11:45:44.787385002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:44.787493 containerd[1466]: time="2025-01-29T11:45:44.787438505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:44.787493 containerd[1466]: time="2025-01-29T11:45:44.787450868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.787805 containerd[1466]: time="2025-01-29T11:45:44.787745229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.811138 systemd[1]: Started cri-containerd-212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d.scope - libcontainer container 212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d. 
Jan 29 11:45:44.826401 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:45:44.852538 containerd[1466]: time="2025-01-29T11:45:44.852425462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69f5d4f59b-9dw6n,Uid:e97ff18c-9ca5-474c-b893-4e67487f341c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d\"" Jan 29 11:45:44.854336 systemd-networkd[1401]: cali9b90bcb0824: Link UP Jan 29 11:45:44.855430 systemd-networkd[1401]: cali9b90bcb0824: Gained carrier Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.681 [INFO][4534] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0 calico-kube-controllers-749bdc5899- calico-system 3e8ec329-4c59-4738-98e7-f420cb51aefa 872 0 2025-01-29 11:45:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:749bdc5899 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-749bdc5899-6mcr2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9b90bcb0824 [] []}} ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.681 [INFO][4534] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.726 [INFO][4575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" HandleID="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.819 [INFO][4575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" HandleID="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-749bdc5899-6mcr2", "timestamp":"2025-01-29 11:45:44.726161739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.819 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.819 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.819 [INFO][4575] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.821 [INFO][4575] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.826 [INFO][4575] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.829 [INFO][4575] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.831 [INFO][4575] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.834 [INFO][4575] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.834 [INFO][4575] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.835 [INFO][4575] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.840 [INFO][4575] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.847 [INFO][4575] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.847 [INFO][4575] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" host="localhost" Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.847 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:45:44.869028 containerd[1466]: 2025-01-29 11:45:44.847 [INFO][4575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" HandleID="k8s-pod-network.f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.869612 containerd[1466]: 2025-01-29 11:45:44.851 [INFO][4534] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0", GenerateName:"calico-kube-controllers-749bdc5899-", Namespace:"calico-system", SelfLink:"", UID:"3e8ec329-4c59-4738-98e7-f420cb51aefa", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749bdc5899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-749bdc5899-6mcr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b90bcb0824", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.869612 containerd[1466]: 2025-01-29 11:45:44.851 [INFO][4534] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.869612 containerd[1466]: 2025-01-29 11:45:44.851 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b90bcb0824 ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.869612 containerd[1466]: 2025-01-29 11:45:44.855 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.869612 containerd[1466]: 2025-01-29 11:45:44.856 [INFO][4534] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0", GenerateName:"calico-kube-controllers-749bdc5899-", Namespace:"calico-system", SelfLink:"", UID:"3e8ec329-4c59-4738-98e7-f420cb51aefa", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749bdc5899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf", Pod:"calico-kube-controllers-749bdc5899-6mcr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b90bcb0824", MAC:"1e:94:1f:06:08:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:44.869612 containerd[1466]: 2025-01-29 11:45:44.865 [INFO][4534] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf" Namespace="calico-system" Pod="calico-kube-controllers-749bdc5899-6mcr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:45:44.889987 containerd[1466]: time="2025-01-29T11:45:44.889756178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:44.889987 containerd[1466]: time="2025-01-29T11:45:44.889823697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:44.889987 containerd[1466]: time="2025-01-29T11:45:44.889839277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.890191 containerd[1466]: time="2025-01-29T11:45:44.890112648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:44.911110 systemd[1]: Started cri-containerd-f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf.scope - libcontainer container f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf. 
Jan 29 11:45:44.924057 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:45:44.949261 containerd[1466]: time="2025-01-29T11:45:44.949200941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749bdc5899-6mcr2,Uid:3e8ec329-4c59-4738-98e7-f420cb51aefa,Namespace:calico-system,Attempt:1,} returns sandbox id \"f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf\"" Jan 29 11:45:45.302727 containerd[1466]: time="2025-01-29T11:45:45.302662194Z" level=info msg="StopPodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\"" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.349 [INFO][4718] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.349 [INFO][4718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" iface="eth0" netns="/var/run/netns/cni-779c772b-5b86-94aa-fc09-1f9a714ddfea" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.350 [INFO][4718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" iface="eth0" netns="/var/run/netns/cni-779c772b-5b86-94aa-fc09-1f9a714ddfea" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.350 [INFO][4718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" iface="eth0" netns="/var/run/netns/cni-779c772b-5b86-94aa-fc09-1f9a714ddfea" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.350 [INFO][4718] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.350 [INFO][4718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.375 [INFO][4725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.375 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.375 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.380 [WARNING][4725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.380 [INFO][4725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.382 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:45:45.388303 containerd[1466]: 2025-01-29 11:45:45.384 [INFO][4718] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:45:45.389024 containerd[1466]: time="2025-01-29T11:45:45.388443227Z" level=info msg="TearDown network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" successfully" Jan 29 11:45:45.389024 containerd[1466]: time="2025-01-29T11:45:45.388471200Z" level=info msg="StopPodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" returns successfully" Jan 29 11:45:45.389075 kubelet[2482]: E0129 11:45:45.388805 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:45.389337 containerd[1466]: time="2025-01-29T11:45:45.389284420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-27tjt,Uid:8232d851-127c-45d7-b458-e6bfcdd82418,Namespace:kube-system,Attempt:1,}" Jan 29 11:45:45.506990 systemd-networkd[1401]: cali699fe438aac: Link UP Jan 29 11:45:45.507687 systemd-networkd[1401]: cali699fe438aac: Gained carrier Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.437 [INFO][4733] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--27tjt-eth0 coredns-6f6b679f8f- kube-system 8232d851-127c-45d7-b458-e6bfcdd82418 893 0 2025-01-29 11:45:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-27tjt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali699fe438aac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.438 [INFO][4733] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.468 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" HandleID="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" 
Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.475 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" HandleID="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f55d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-27tjt", "timestamp":"2025-01-29 11:45:45.468467961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.476 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.476 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.476 [INFO][4748] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.478 [INFO][4748] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.481 [INFO][4748] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.484 [INFO][4748] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.486 [INFO][4748] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.488 [INFO][4748] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.488 [INFO][4748] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.489 [INFO][4748] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470 Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.493 [INFO][4748] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.500 [INFO][4748] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.500 [INFO][4748] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" host="localhost" Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.500 [INFO][4748] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jan 29 11:45:45.520462 containerd[1466]: 2025-01-29 11:45:45.500 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" HandleID="k8s-pod-network.b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.520983 containerd[1466]: 2025-01-29 11:45:45.503 [INFO][4733] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--27tjt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8232d851-127c-45d7-b458-e6bfcdd82418", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-27tjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali699fe438aac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:45.520983 containerd[1466]: 2025-01-29 11:45:45.504 [INFO][4733] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.520983 containerd[1466]: 2025-01-29 11:45:45.504 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali699fe438aac ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.520983 containerd[1466]: 2025-01-29 11:45:45.507 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.520983 
containerd[1466]: 2025-01-29 11:45:45.507 [INFO][4733] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--27tjt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8232d851-127c-45d7-b458-e6bfcdd82418", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470", Pod:"coredns-6f6b679f8f-27tjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali699fe438aac", MAC:"0e:99:b3:e0:01:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:45:45.520983 containerd[1466]: 2025-01-29 11:45:45.516 [INFO][4733] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470" Namespace="kube-system" Pod="coredns-6f6b679f8f-27tjt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:45:45.543404 containerd[1466]: time="2025-01-29T11:45:45.543247249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:45:45.543404 containerd[1466]: time="2025-01-29T11:45:45.543336620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:45:45.543404 containerd[1466]: time="2025-01-29T11:45:45.543352771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:45.544527 containerd[1466]: time="2025-01-29T11:45:45.544417930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:45:45.568068 systemd[1]: Started cri-containerd-b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470.scope - libcontainer container b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470. 
Jan 29 11:45:45.581020 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:45:45.600532 systemd[1]: run-netns-cni\x2d779c772b\x2d5b86\x2d94aa\x2dfc09\x2d1f9a714ddfea.mount: Deactivated successfully. Jan 29 11:45:45.606746 containerd[1466]: time="2025-01-29T11:45:45.606643176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-27tjt,Uid:8232d851-127c-45d7-b458-e6bfcdd82418,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470\"" Jan 29 11:45:45.607461 kubelet[2482]: E0129 11:45:45.607428 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:45.610149 containerd[1466]: time="2025-01-29T11:45:45.610068844Z" level=info msg="CreateContainer within sandbox \"b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:45:45.627783 containerd[1466]: time="2025-01-29T11:45:45.627199367Z" level=info msg="CreateContainer within sandbox \"b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0545a41c8d6b96c76e77e1202a7e871ed10a345a1c028cb7dec08b9c09d50190\"" Jan 29 11:45:45.628095 containerd[1466]: time="2025-01-29T11:45:45.627972361Z" level=info msg="StartContainer for \"0545a41c8d6b96c76e77e1202a7e871ed10a345a1c028cb7dec08b9c09d50190\"" Jan 29 11:45:45.662094 systemd[1]: Started cri-containerd-0545a41c8d6b96c76e77e1202a7e871ed10a345a1c028cb7dec08b9c09d50190.scope - libcontainer container 0545a41c8d6b96c76e77e1202a7e871ed10a345a1c028cb7dec08b9c09d50190. 
Jan 29 11:45:45.692873 containerd[1466]: time="2025-01-29T11:45:45.692826185Z" level=info msg="StartContainer for \"0545a41c8d6b96c76e77e1202a7e871ed10a345a1c028cb7dec08b9c09d50190\" returns successfully" Jan 29 11:45:45.844206 systemd-networkd[1401]: cali54601cd9cbd: Gained IPv6LL Jan 29 11:45:45.908148 systemd-networkd[1401]: cali60a9e6a1834: Gained IPv6LL Jan 29 11:45:45.972204 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Jan 29 11:45:46.101199 systemd-networkd[1401]: cali9b90bcb0824: Gained IPv6LL Jan 29 11:45:46.446720 kubelet[2482]: E0129 11:45:46.446683 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:46.467066 kubelet[2482]: I0129 11:45:46.466993 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-27tjt" podStartSLOduration=37.466972011 podStartE2EDuration="37.466972011s" podCreationTimestamp="2025-01-29 11:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:45:46.466833046 +0000 UTC m=+42.319647855" watchObservedRunningTime="2025-01-29 11:45:46.466972011 +0000 UTC m=+42.319786830" Jan 29 11:45:46.493416 containerd[1466]: time="2025-01-29T11:45:46.493351550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:46.494435 containerd[1466]: time="2025-01-29T11:45:46.494193113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:45:46.497627 containerd[1466]: time="2025-01-29T11:45:46.497596506Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:46.500328 containerd[1466]: time="2025-01-29T11:45:46.500303793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:46.500867 containerd[1466]: time="2025-01-29T11:45:46.500840826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.953445911s" Jan 29 11:45:46.500936 containerd[1466]: time="2025-01-29T11:45:46.500870994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:45:46.502059 containerd[1466]: time="2025-01-29T11:45:46.501982681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:45:46.503516 containerd[1466]: time="2025-01-29T11:45:46.503489702Z" level=info msg="CreateContainer within sandbox \"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:45:46.521272 containerd[1466]: time="2025-01-29T11:45:46.521154582Z" level=info msg="CreateContainer within sandbox \"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"45908dd1836f177cc17ca2a0508b3ef14594ffaadfbaf7695712bd7dc10a504e\"" Jan 29 11:45:46.522246 containerd[1466]: time="2025-01-29T11:45:46.522190174Z" level=info msg="StartContainer for \"45908dd1836f177cc17ca2a0508b3ef14594ffaadfbaf7695712bd7dc10a504e\"" Jan 29 11:45:46.548132 systemd-networkd[1401]: calid479894893e: Gained IPv6LL Jan 29 11:45:46.549120 systemd[1]: Started cri-containerd-45908dd1836f177cc17ca2a0508b3ef14594ffaadfbaf7695712bd7dc10a504e.scope - libcontainer container 45908dd1836f177cc17ca2a0508b3ef14594ffaadfbaf7695712bd7dc10a504e. Jan 29 11:45:46.584770 containerd[1466]: time="2025-01-29T11:45:46.584727239Z" level=info msg="StartContainer for \"45908dd1836f177cc17ca2a0508b3ef14594ffaadfbaf7695712bd7dc10a504e\" returns successfully" Jan 29 11:45:47.444132 systemd-networkd[1401]: cali699fe438aac: Gained IPv6LL Jan 29 11:45:47.450424 kubelet[2482]: E0129 11:45:47.450398 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:48.453717 kubelet[2482]: E0129 11:45:48.453668 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:45:48.524576 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980). Jan 29 11:45:48.566978 sshd[4901]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:48.569089 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:48.574250 systemd-logind[1452]: New session 11 of user core. Jan 29 11:45:48.584125 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:45:48.713926 sshd[4901]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:48.724235 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:45980.service: Deactivated successfully. Jan 29 11:45:48.726294 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:45:48.728222 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:45:48.729610 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:45984.service - OpenSSH per-connection server daemon (10.0.0.1:45984). Jan 29 11:45:48.731194 systemd-logind[1452]: Removed session 11. Jan 29 11:45:48.780501 sshd[4917]: Accepted publickey for core from 10.0.0.1 port 45984 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:48.782286 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:48.786502 systemd-logind[1452]: New session 12 of user core. Jan 29 11:45:48.799219 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:45:49.053632 sshd[4917]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:49.066425 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:45984.service: Deactivated successfully. Jan 29 11:45:49.068942 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:45:49.070579 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:45:49.077469 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:45994.service - OpenSSH per-connection server daemon (10.0.0.1:45994). Jan 29 11:45:49.078846 systemd-logind[1452]: Removed session 12. 
Jan 29 11:45:49.119216 sshd[4933]: Accepted publickey for core from 10.0.0.1 port 45994 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:49.121165 sshd[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:49.126394 systemd-logind[1452]: New session 13 of user core. Jan 29 11:45:49.130041 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:45:49.277180 sshd[4933]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:49.282561 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:45994.service: Deactivated successfully. Jan 29 11:45:49.285741 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:45:49.286773 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:45:49.288029 systemd-logind[1452]: Removed session 13. Jan 29 11:45:49.571768 containerd[1466]: time="2025-01-29T11:45:49.571690358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:49.572470 containerd[1466]: time="2025-01-29T11:45:49.572418163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:45:49.573449 containerd[1466]: time="2025-01-29T11:45:49.573411102Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:49.575642 containerd[1466]: time="2025-01-29T11:45:49.575604507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:49.576343 containerd[1466]: time="2025-01-29T11:45:49.576294580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.074273014s" Jan 29 11:45:49.576343 containerd[1466]: time="2025-01-29T11:45:49.576337381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:45:49.577619 containerd[1466]: time="2025-01-29T11:45:49.577573313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:45:49.578794 containerd[1466]: time="2025-01-29T11:45:49.578751425Z" level=info msg="CreateContainer within sandbox \"cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:45:49.594037 containerd[1466]: time="2025-01-29T11:45:49.593984803Z" level=info msg="CreateContainer within sandbox \"cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"98b7796ccddbf4972196009c914f0e6a0d6033c1c9949b2e4362111bcce6bcbd\"" Jan 29 11:45:49.594697 containerd[1466]: time="2025-01-29T11:45:49.594552804Z" level=info msg="StartContainer for \"98b7796ccddbf4972196009c914f0e6a0d6033c1c9949b2e4362111bcce6bcbd\"" Jan 29 11:45:49.645200 systemd[1]: Started 
cri-containerd-98b7796ccddbf4972196009c914f0e6a0d6033c1c9949b2e4362111bcce6bcbd.scope - libcontainer container 98b7796ccddbf4972196009c914f0e6a0d6033c1c9949b2e4362111bcce6bcbd. Jan 29 11:45:49.950557 containerd[1466]: time="2025-01-29T11:45:49.950502713Z" level=info msg="StartContainer for \"98b7796ccddbf4972196009c914f0e6a0d6033c1c9949b2e4362111bcce6bcbd\" returns successfully" Jan 29 11:45:50.068356 containerd[1466]: time="2025-01-29T11:45:50.068276966Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:50.069172 containerd[1466]: time="2025-01-29T11:45:50.069124658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:45:50.071324 containerd[1466]: time="2025-01-29T11:45:50.071281502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 493.669836ms" Jan 29 11:45:50.071324 containerd[1466]: time="2025-01-29T11:45:50.071311690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:45:50.072336 containerd[1466]: time="2025-01-29T11:45:50.072209477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:45:50.073929 containerd[1466]: time="2025-01-29T11:45:50.073779594Z" level=info msg="CreateContainer within sandbox \"212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:45:50.100697 containerd[1466]: time="2025-01-29T11:45:50.100150031Z" level=info msg="CreateContainer within sandbox \"212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f69c2bdfdeb18c0b464afcbaa9e947f4b1243089972ef362c24800ff15a7fb81\"" Jan 29 11:45:50.102948 containerd[1466]: time="2025-01-29T11:45:50.101336308Z" level=info msg="StartContainer for \"f69c2bdfdeb18c0b464afcbaa9e947f4b1243089972ef362c24800ff15a7fb81\"" Jan 29 11:45:50.146161 systemd[1]: Started cri-containerd-f69c2bdfdeb18c0b464afcbaa9e947f4b1243089972ef362c24800ff15a7fb81.scope - libcontainer container f69c2bdfdeb18c0b464afcbaa9e947f4b1243089972ef362c24800ff15a7fb81. 
Jan 29 11:45:50.193206 containerd[1466]: time="2025-01-29T11:45:50.193143313Z" level=info msg="StartContainer for \"f69c2bdfdeb18c0b464afcbaa9e947f4b1243089972ef362c24800ff15a7fb81\" returns successfully" Jan 29 11:45:50.495965 kubelet[2482]: I0129 11:45:50.495156 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69f5d4f59b-p5w8p" podStartSLOduration=30.646828929 podStartE2EDuration="35.495133851s" podCreationTimestamp="2025-01-29 11:45:15 +0000 UTC" firstStartedPulling="2025-01-29 11:45:44.729009778 +0000 UTC m=+40.581824597" lastFinishedPulling="2025-01-29 11:45:49.5773147 +0000 UTC m=+45.430129519" observedRunningTime="2025-01-29 11:45:50.470347407 +0000 UTC m=+46.323162226" watchObservedRunningTime="2025-01-29 11:45:50.495133851 +0000 UTC m=+46.347948670" Jan 29 11:45:50.495965 kubelet[2482]: I0129 11:45:50.495246 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69f5d4f59b-9dw6n" podStartSLOduration=30.276885863 podStartE2EDuration="35.495241937s" podCreationTimestamp="2025-01-29 11:45:15 +0000 UTC" firstStartedPulling="2025-01-29 11:45:44.853692648 +0000 UTC m=+40.706507467" lastFinishedPulling="2025-01-29 11:45:50.072048721 +0000 UTC m=+45.924863541" observedRunningTime="2025-01-29 11:45:50.494312789 +0000 UTC m=+46.347127608" watchObservedRunningTime="2025-01-29 11:45:50.495241937 +0000 UTC m=+46.348056756" Jan 29 11:45:51.464622 kubelet[2482]: I0129 11:45:51.464585 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:45:52.712322 containerd[1466]: time="2025-01-29T11:45:52.712272898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:52.713066 containerd[1466]: time="2025-01-29T11:45:52.713030819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:45:52.714096 containerd[1466]: time="2025-01-29T11:45:52.714079082Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:52.716194 containerd[1466]: time="2025-01-29T11:45:52.716167182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:52.716731 containerd[1466]: time="2025-01-29T11:45:52.716704835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.644464228s" Jan 29 11:45:52.716759 containerd[1466]: time="2025-01-29T11:45:52.716731746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:45:52.717656 containerd[1466]: time="2025-01-29T11:45:52.717635705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:45:52.725476 containerd[1466]: 
time="2025-01-29T11:45:52.725440255Z" level=info msg="CreateContainer within sandbox \"f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:45:52.747010 containerd[1466]: time="2025-01-29T11:45:52.746966464Z" level=info msg="CreateContainer within sandbox \"f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5d5ccac49c741c2530e45e927f54bc43d4b487fbdde03a2714aa020c61af9dc8\"" Jan 29 11:45:52.747559 containerd[1466]: time="2025-01-29T11:45:52.747523754Z" level=info msg="StartContainer for \"5d5ccac49c741c2530e45e927f54bc43d4b487fbdde03a2714aa020c61af9dc8\"" Jan 29 11:45:52.777050 systemd[1]: Started cri-containerd-5d5ccac49c741c2530e45e927f54bc43d4b487fbdde03a2714aa020c61af9dc8.scope - libcontainer container 5d5ccac49c741c2530e45e927f54bc43d4b487fbdde03a2714aa020c61af9dc8. Jan 29 11:45:52.819377 containerd[1466]: time="2025-01-29T11:45:52.819339053Z" level=info msg="StartContainer for \"5d5ccac49c741c2530e45e927f54bc43d4b487fbdde03a2714aa020c61af9dc8\" returns successfully" Jan 29 11:45:53.539341 kubelet[2482]: I0129 11:45:53.539254 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-749bdc5899-6mcr2" podStartSLOduration=29.772321677 podStartE2EDuration="37.539235903s" podCreationTimestamp="2025-01-29 11:45:16 +0000 UTC" firstStartedPulling="2025-01-29 11:45:44.95053834 +0000 UTC m=+40.803353149" lastFinishedPulling="2025-01-29 11:45:52.717452566 +0000 UTC m=+48.570267375" observedRunningTime="2025-01-29 11:45:53.538775338 +0000 UTC m=+49.391590157" watchObservedRunningTime="2025-01-29 11:45:53.539235903 +0000 UTC m=+49.392050722" Jan 29 11:45:54.293239 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:55398.service - OpenSSH per-connection server daemon (10.0.0.1:55398). Jan 29 11:45:54.337188 sshd[5113]: Accepted publickey for core from 10.0.0.1 port 55398 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:54.339228 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:54.344662 systemd-logind[1452]: New session 14 of user core. Jan 29 11:45:54.358122 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:45:54.490974 sshd[5113]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:54.494183 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:55398.service: Deactivated successfully. Jan 29 11:45:54.496874 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:45:54.498864 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:45:54.500069 systemd-logind[1452]: Removed session 14. 
Jan 29 11:45:54.602948 containerd[1466]: time="2025-01-29T11:45:54.602788101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:54.604066 containerd[1466]: time="2025-01-29T11:45:54.604019190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:45:54.605430 containerd[1466]: time="2025-01-29T11:45:54.605373835Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:54.607580 containerd[1466]: time="2025-01-29T11:45:54.607542927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:45:54.608197 containerd[1466]: time="2025-01-29T11:45:54.608169708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.890506912s" Jan 29 11:45:54.608236 containerd[1466]: time="2025-01-29T11:45:54.608202471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:45:54.610951 containerd[1466]: time="2025-01-29T11:45:54.610901831Z" level=info msg="CreateContainer within sandbox \"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:45:54.630682 containerd[1466]: time="2025-01-29T11:45:54.630638029Z" level=info msg="CreateContainer within sandbox \"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"23113f28ea6ec0382baf0d123f66fbbcd91127d3500f526ec676b6598bb86a3f\"" Jan 29 11:45:54.631440 containerd[1466]: time="2025-01-29T11:45:54.631383888Z" level=info msg="StartContainer for \"23113f28ea6ec0382baf0d123f66fbbcd91127d3500f526ec676b6598bb86a3f\"" Jan 29 11:45:54.677047 systemd[1]: Started cri-containerd-23113f28ea6ec0382baf0d123f66fbbcd91127d3500f526ec676b6598bb86a3f.scope - libcontainer container 23113f28ea6ec0382baf0d123f66fbbcd91127d3500f526ec676b6598bb86a3f. 
Jan 29 11:45:54.847025 containerd[1466]: time="2025-01-29T11:45:54.846960061Z" level=info msg="StartContainer for \"23113f28ea6ec0382baf0d123f66fbbcd91127d3500f526ec676b6598bb86a3f\" returns successfully" Jan 29 11:45:55.373455 kubelet[2482]: I0129 11:45:55.373410 2482 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:45:55.373455 kubelet[2482]: I0129 11:45:55.373446 2482 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:45:55.484966 kubelet[2482]: I0129 11:45:55.484895 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sllk6" podStartSLOduration=29.422471099 podStartE2EDuration="39.484878476s" podCreationTimestamp="2025-01-29 11:45:16 +0000 UTC" firstStartedPulling="2025-01-29 11:45:44.547066711 +0000 UTC m=+40.399881530" lastFinishedPulling="2025-01-29 11:45:54.609474088 +0000 UTC m=+50.462288907" observedRunningTime="2025-01-29 11:45:55.483852788 +0000 UTC m=+51.336667607" watchObservedRunningTime="2025-01-29 11:45:55.484878476 +0000 UTC m=+51.337693296" Jan 29 11:45:59.503415 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:55414.service - OpenSSH per-connection server daemon (10.0.0.1:55414). Jan 29 11:45:59.542016 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 55414 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:45:59.543523 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:45:59.547576 systemd-logind[1452]: New session 15 of user core. Jan 29 11:45:59.560050 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:45:59.681758 sshd[5179]: pam_unix(sshd:session): session closed for user core Jan 29 11:45:59.687086 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:55414.service: Deactivated successfully. Jan 29 11:45:59.689059 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:45:59.689664 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:45:59.690537 systemd-logind[1452]: Removed session 15. Jan 29 11:46:00.479225 kubelet[2482]: E0129 11:46:00.479189 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:46:04.286049 containerd[1466]: time="2025-01-29T11:46:04.285948759Z" level=info msg="StopPodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\"" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.321 [WARNING][5228] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9e8e6d53-1dde-47b7-be75-dd444d38411e", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667", Pod:"coredns-6f6b679f8f-8dvxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibade9da93ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.321 [INFO][5228] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.321 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" iface="eth0" netns="" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.321 [INFO][5228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.321 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.343 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.343 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.344 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.349 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.349 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.351 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.356634 containerd[1466]: 2025-01-29 11:46:04.353 [INFO][5228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.357352 containerd[1466]: time="2025-01-29T11:46:04.356676120Z" level=info msg="TearDown network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" successfully" Jan 29 11:46:04.357352 containerd[1466]: time="2025-01-29T11:46:04.356697211Z" level=info msg="StopPodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" returns successfully" Jan 29 11:46:04.363744 containerd[1466]: time="2025-01-29T11:46:04.363712302Z" level=info msg="RemovePodSandbox for \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\"" Jan 29 11:46:04.365876 containerd[1466]: time="2025-01-29T11:46:04.365845799Z" level=info msg="Forcibly stopping sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\"" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.397 [WARNING][5260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9e8e6d53-1dde-47b7-be75-dd444d38411e", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e8f9d688bc6a2727e6900bf85027c1f7f7f075fa2e65e717486c40e9952f667", Pod:"coredns-6f6b679f8f-8dvxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibade9da93ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.397 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.397 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" iface="eth0" netns="" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.397 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.397 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.415 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.415 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.415 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.420 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.420 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" HandleID="k8s-pod-network.8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Workload="localhost-k8s-coredns--6f6b679f8f--8dvxp-eth0" Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.421 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.425203 containerd[1466]: 2025-01-29 11:46:04.423 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551" Jan 29 11:46:04.425645 containerd[1466]: time="2025-01-29T11:46:04.425206693Z" level=info msg="TearDown network for sandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" successfully" Jan 29 11:46:04.434631 containerd[1466]: time="2025-01-29T11:46:04.434594867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:46:04.434875 containerd[1466]: time="2025-01-29T11:46:04.434655161Z" level=info msg="RemovePodSandbox \"8ac650492dfd486c7c460f7f606fcf1999ffd3734f82e59bdea3450d6e603551\" returns successfully" Jan 29 11:46:04.435236 containerd[1466]: time="2025-01-29T11:46:04.435212469Z" level=info msg="StopPodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\"" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.466 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--27tjt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8232d851-127c-45d7-b458-e6bfcdd82418", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470", Pod:"coredns-6f6b679f8f-27tjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali699fe438aac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.466 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.466 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" iface="eth0" netns="" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.466 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.466 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.484 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.484 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.484 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.489 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.489 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.490 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.494855 containerd[1466]: 2025-01-29 11:46:04.492 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.495469 containerd[1466]: time="2025-01-29T11:46:04.494865747Z" level=info msg="TearDown network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" successfully" Jan 29 11:46:04.495469 containerd[1466]: time="2025-01-29T11:46:04.494890213Z" level=info msg="StopPodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" returns successfully" Jan 29 11:46:04.495469 containerd[1466]: time="2025-01-29T11:46:04.495182718Z" level=info msg="RemovePodSandbox for \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\"" Jan 29 11:46:04.495469 containerd[1466]: time="2025-01-29T11:46:04.495205170Z" level=info msg="Forcibly stopping sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\"" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.527 [WARNING][5319] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--27tjt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8232d851-127c-45d7-b458-e6bfcdd82418", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4d34e97d7c5b555ba6bf14cbcebf628a685cfddfeba855dcd37410ace8af470", Pod:"coredns-6f6b679f8f-27tjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali699fe438aac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.527 [INFO][5319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.527 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" iface="eth0" netns="" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.527 [INFO][5319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.527 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.545 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.546 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.546 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.552 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.552 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" HandleID="k8s-pod-network.90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Workload="localhost-k8s-coredns--6f6b679f8f--27tjt-eth0" Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.553 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.557255 containerd[1466]: 2025-01-29 11:46:04.555 [INFO][5319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db" Jan 29 11:46:04.557255 containerd[1466]: time="2025-01-29T11:46:04.557207656Z" level=info msg="TearDown network for sandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" successfully" Jan 29 11:46:04.562113 containerd[1466]: time="2025-01-29T11:46:04.562087245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:46:04.562187 containerd[1466]: time="2025-01-29T11:46:04.562148121Z" level=info msg="RemovePodSandbox \"90c4a827c408b18e396122ff425bd7f6164ccdb61f60cf48cdb819bc145949db\" returns successfully" Jan 29 11:46:04.565059 containerd[1466]: time="2025-01-29T11:46:04.565027184Z" level=info msg="StopPodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\"" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.595 [WARNING][5349] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sllk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f43cd9c6-970c-4688-9f00-2800e91cf652", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01", Pod:"csi-node-driver-sllk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54601cd9cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.595 [INFO][5349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.595 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" iface="eth0" netns="" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.595 [INFO][5349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.596 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.614 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.614 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.614 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.620 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.620 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.623 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.627763 containerd[1466]: 2025-01-29 11:46:04.625 [INFO][5349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.628360 containerd[1466]: time="2025-01-29T11:46:04.627782498Z" level=info msg="TearDown network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" successfully" Jan 29 11:46:04.628360 containerd[1466]: time="2025-01-29T11:46:04.627814348Z" level=info msg="StopPodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" returns successfully" Jan 29 11:46:04.628426 containerd[1466]: time="2025-01-29T11:46:04.628354203Z" level=info msg="RemovePodSandbox for \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\"" Jan 29 11:46:04.628426 containerd[1466]: time="2025-01-29T11:46:04.628385442Z" level=info msg="Forcibly stopping sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\"" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.659 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sllk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f43cd9c6-970c-4688-9f00-2800e91cf652", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5e317636d151465bdb5558681f8943f69a50df90a3c296b33a5b98c35cf4b01", Pod:"csi-node-driver-sllk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54601cd9cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.659 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.659 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" iface="eth0" netns="" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.659 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.659 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.677 [INFO][5391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.677 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.677 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.681 [WARNING][5391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.681 [INFO][5391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" HandleID="k8s-pod-network.e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Workload="localhost-k8s-csi--node--driver--sllk6-eth0" Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.683 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.687226 containerd[1466]: 2025-01-29 11:46:04.685 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e" Jan 29 11:46:04.687734 containerd[1466]: time="2025-01-29T11:46:04.687282115Z" level=info msg="TearDown network for sandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" successfully" Jan 29 11:46:04.691183 containerd[1466]: time="2025-01-29T11:46:04.691138372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:46:04.691259 containerd[1466]: time="2025-01-29T11:46:04.691189399Z" level=info msg="RemovePodSandbox \"e36f8f84964ddbe9a7d3c3c26b9187c1f44f6b3f8033bf2c74084074544d1f4e\" returns successfully" Jan 29 11:46:04.692095 containerd[1466]: time="2025-01-29T11:46:04.691781311Z" level=info msg="StopPodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\"" Jan 29 11:46:04.693086 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:51524.service - OpenSSH per-connection server daemon (10.0.0.1:51524). Jan 29 11:46:04.737794 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 51524 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:46:04.740046 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:46:04.744997 systemd-logind[1452]: New session 16 of user core. Jan 29 11:46:04.755054 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.726 [WARNING][5416] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341ae40d-b2cd-48be-89df-3aae61760d67", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602", Pod:"calico-apiserver-69f5d4f59b-p5w8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60a9e6a1834", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.726 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.726 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" iface="eth0" netns="" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.726 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.726 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.749 [INFO][5426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.749 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.749 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.755 [WARNING][5426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.755 [INFO][5426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.757 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.761769 containerd[1466]: 2025-01-29 11:46:04.759 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.762399 containerd[1466]: time="2025-01-29T11:46:04.761807773Z" level=info msg="TearDown network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" successfully" Jan 29 11:46:04.762399 containerd[1466]: time="2025-01-29T11:46:04.761833491Z" level=info msg="StopPodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" returns successfully" Jan 29 11:46:04.762399 containerd[1466]: time="2025-01-29T11:46:04.762375990Z" level=info msg="RemovePodSandbox for \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\"" Jan 29 11:46:04.762467 containerd[1466]: time="2025-01-29T11:46:04.762403553Z" level=info msg="Forcibly stopping sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\"" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.796 [WARNING][5449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341ae40d-b2cd-48be-89df-3aae61760d67", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf0f3464b8bee1bfa3970315667461b2f2e6b28abef8cf45068816186a732602", Pod:"calico-apiserver-69f5d4f59b-p5w8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60a9e6a1834", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.796 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.796 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" iface="eth0" netns="" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.796 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.796 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.823 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.823 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.823 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.829 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.829 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" HandleID="k8s-pod-network.738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--p5w8p-eth0" Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.831 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.836570 containerd[1466]: 2025-01-29 11:46:04.833 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea" Jan 29 11:46:04.836570 containerd[1466]: time="2025-01-29T11:46:04.836522319Z" level=info msg="TearDown network for sandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" successfully" Jan 29 11:46:04.841807 containerd[1466]: time="2025-01-29T11:46:04.841766349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:46:04.841964 containerd[1466]: time="2025-01-29T11:46:04.841845529Z" level=info msg="RemovePodSandbox \"738c8ef22b8cd7b7cc5b8f5490bcc3fafdcfd80887703f7440177cf5a2cc48ea\" returns successfully" Jan 29 11:46:04.842455 containerd[1466]: time="2025-01-29T11:46:04.842425891Z" level=info msg="StopPodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\"" Jan 29 11:46:04.892398 sshd[5401]: pam_unix(sshd:session): session closed for user core Jan 29 11:46:04.897059 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:51524.service: Deactivated successfully. Jan 29 11:46:04.901038 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:46:04.901938 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:46:04.903148 systemd-logind[1452]: Removed session 16. Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.884 [WARNING][5487] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e97ff18c-9ca5-474c-b893-4e67487f341c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d", Pod:"calico-apiserver-69f5d4f59b-9dw6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid479894893e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.885 [INFO][5487] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.885 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" iface="eth0" netns="" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.885 [INFO][5487] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.885 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.909 [INFO][5495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.909 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.909 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.914 [WARNING][5495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.914 [INFO][5495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.916 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.920985 containerd[1466]: 2025-01-29 11:46:04.918 [INFO][5487] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.921402 containerd[1466]: time="2025-01-29T11:46:04.921006562Z" level=info msg="TearDown network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" successfully" Jan 29 11:46:04.921402 containerd[1466]: time="2025-01-29T11:46:04.921032091Z" level=info msg="StopPodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" returns successfully" Jan 29 11:46:04.921619 containerd[1466]: time="2025-01-29T11:46:04.921594849Z" level=info msg="RemovePodSandbox for \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\"" Jan 29 11:46:04.921774 containerd[1466]: time="2025-01-29T11:46:04.921623362Z" level=info msg="Forcibly stopping sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\"" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.960 [WARNING][5520] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0", GenerateName:"calico-apiserver-69f5d4f59b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e97ff18c-9ca5-474c-b893-4e67487f341c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69f5d4f59b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"212f5395376154e56037d512d15e2d67dedbd750096e505df61b7699841a598d", Pod:"calico-apiserver-69f5d4f59b-9dw6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid479894893e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.960 [INFO][5520] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.960 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" iface="eth0" netns="" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.960 [INFO][5520] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.960 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.980 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.980 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.980 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.986 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.986 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" HandleID="k8s-pod-network.4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Workload="localhost-k8s-calico--apiserver--69f5d4f59b--9dw6n-eth0" Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.987 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:04.992952 containerd[1466]: 2025-01-29 11:46:04.990 [INFO][5520] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9" Jan 29 11:46:04.993365 containerd[1466]: time="2025-01-29T11:46:04.993014142Z" level=info msg="TearDown network for sandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" successfully" Jan 29 11:46:04.998690 containerd[1466]: time="2025-01-29T11:46:04.998620249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:46:04.998749 containerd[1466]: time="2025-01-29T11:46:04.998727533Z" level=info msg="RemovePodSandbox \"4444eeae14a3ba2a1842c969b679aaf99a8c0b0ab88f1de2432023c70a8fa9c9\" returns successfully" Jan 29 11:46:04.999322 containerd[1466]: time="2025-01-29T11:46:04.999290761Z" level=info msg="StopPodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\"" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.038 [WARNING][5550] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0", GenerateName:"calico-kube-controllers-749bdc5899-", Namespace:"calico-system", SelfLink:"", UID:"3e8ec329-4c59-4738-98e7-f420cb51aefa", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749bdc5899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf", Pod:"calico-kube-controllers-749bdc5899-6mcr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b90bcb0824", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.039 [INFO][5550] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.040 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" iface="eth0" netns="" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.040 [INFO][5550] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.040 [INFO][5550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.061 [INFO][5558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.061 [INFO][5558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.062 [INFO][5558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.067 [WARNING][5558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.067 [INFO][5558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.069 [INFO][5558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:05.074764 containerd[1466]: 2025-01-29 11:46:05.072 [INFO][5550] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.075380 containerd[1466]: time="2025-01-29T11:46:05.074807755Z" level=info msg="TearDown network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" successfully" Jan 29 11:46:05.075380 containerd[1466]: time="2025-01-29T11:46:05.074834185Z" level=info msg="StopPodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" returns successfully" Jan 29 11:46:05.075505 containerd[1466]: time="2025-01-29T11:46:05.075469581Z" level=info msg="RemovePodSandbox for \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\"" Jan 29 11:46:05.075558 containerd[1466]: time="2025-01-29T11:46:05.075510538Z" level=info msg="Forcibly stopping sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\"" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.116 [WARNING][5580] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0", GenerateName:"calico-kube-controllers-749bdc5899-", Namespace:"calico-system", SelfLink:"", UID:"3e8ec329-4c59-4738-98e7-f420cb51aefa", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749bdc5899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f27dbaa7bac3afb7d45ee0298499b29165328c3203e5dba04e4590b0d775edbf", Pod:"calico-kube-controllers-749bdc5899-6mcr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b90bcb0824", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.116 [INFO][5580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.116 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" iface="eth0" netns="" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.116 [INFO][5580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.116 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.140 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.140 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.140 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.145 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.145 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" HandleID="k8s-pod-network.41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Workload="localhost-k8s-calico--kube--controllers--749bdc5899--6mcr2-eth0" Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.147 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:46:05.151807 containerd[1466]: 2025-01-29 11:46:05.149 [INFO][5580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64" Jan 29 11:46:05.152345 containerd[1466]: time="2025-01-29T11:46:05.151838189Z" level=info msg="TearDown network for sandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" successfully" Jan 29 11:46:05.156099 containerd[1466]: time="2025-01-29T11:46:05.156022167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:46:05.156099 containerd[1466]: time="2025-01-29T11:46:05.156081119Z" level=info msg="RemovePodSandbox \"41b091a5199a11886427f2da0fdeca1cbb3d52e33ad2eea7f1a445e826067a64\" returns successfully" Jan 29 11:46:09.909143 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:51526.service - OpenSSH per-connection server daemon (10.0.0.1:51526). Jan 29 11:46:09.945725 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 51526 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:46:09.947552 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:46:09.951655 systemd-logind[1452]: New session 17 of user core. Jan 29 11:46:09.957077 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:46:10.074153 sshd[5599]: pam_unix(sshd:session): session closed for user core Jan 29 11:46:10.081978 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:51526.service: Deactivated successfully. Jan 29 11:46:10.083907 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:46:10.085285 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:46:10.097203 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:51530.service - OpenSSH per-connection server daemon (10.0.0.1:51530). Jan 29 11:46:10.098139 systemd-logind[1452]: Removed session 17. Jan 29 11:46:10.130320 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 51530 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:46:10.131837 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:46:10.136348 systemd-logind[1452]: New session 18 of user core. Jan 29 11:46:10.149050 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:46:10.761475 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 29 11:46:10.770994 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:51530.service: Deactivated successfully. 
Jan 29 11:46:10.772976 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:46:10.774711 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:46:10.776056 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544).
Jan 29 11:46:10.776801 systemd-logind[1452]: Removed session 18.
Jan 29 11:46:10.813794 sshd[5648]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:10.815364 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:10.819377 systemd-logind[1452]: New session 19 of user core.
Jan 29 11:46:10.830051 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:46:12.600635 sshd[5648]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:12.613197 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:51544.service: Deactivated successfully.
Jan 29 11:46:12.616974 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:46:12.619054 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:46:12.627372 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:59048.service - OpenSSH per-connection server daemon (10.0.0.1:59048).
Jan 29 11:46:12.628700 systemd-logind[1452]: Removed session 19.
Jan 29 11:46:12.665723 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 59048 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:12.667710 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:12.672560 systemd-logind[1452]: New session 20 of user core.
Jan 29 11:46:12.685182 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:46:12.933529 sshd[5672]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:12.943293 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:59048.service: Deactivated successfully.
Jan 29 11:46:12.945745 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:46:12.947895 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:46:12.956480 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:59064.service - OpenSSH per-connection server daemon (10.0.0.1:59064).
Jan 29 11:46:12.957777 systemd-logind[1452]: Removed session 20.
Jan 29 11:46:12.987956 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 59064 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:12.989751 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:12.994629 systemd-logind[1452]: New session 21 of user core.
Jan 29 11:46:13.005215 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:46:13.147039 sshd[5684]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:13.152034 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:59064.service: Deactivated successfully.
Jan 29 11:46:13.154126 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:46:13.154826 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:46:13.156034 systemd-logind[1452]: Removed session 21.
Jan 29 11:46:17.755722 kubelet[2482]: I0129 11:46:17.755658 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:46:18.157833 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:59076.service - OpenSSH per-connection server daemon (10.0.0.1:59076).
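[Editor's note] Sessions 17 through 21 above all follow the same lifecycle: systemd accepts the TCP connection and starts a per-connection sshd@<n>-<local>:22-<remote>:<port>.service unit, pam_unix opens the session, systemd-logind registers session N and its session-N.scope, and on logout the scope and service are deactivated and the session removed. Purely as an illustration (not part of the log), a short Go sketch that pairs the logind "New session" / "Removed session" lines and computes each session's lifetime:

// Illustrative only: pair "New session N" / "Removed session N" journal
// lines, as emitted by systemd-logind above, and report session lifetimes.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var re = regexp.MustCompile(`^Jan 29 (\S+) systemd-logind\[\d+\]: (New|Removed) session (\d+)`)

func main() {
	lines := []string{
		"Jan 29 11:46:10.819377 systemd-logind[1452]: New session 19 of user core.",
		"Jan 29 11:46:12.628700 systemd-logind[1452]: Removed session 19.",
	}
	opened := map[string]time.Time{} // session number -> open timestamp
	for _, l := range lines {
		m := re.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		ts, err := time.Parse("15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "New":
			opened[m[3]] = ts
		case "Removed":
			if start, ok := opened[m[3]]; ok {
				fmt.Printf("session %s lived %v\n", m[3], ts.Sub(start))
			}
		}
	}
}

Run against the entries above, it would report, for example, that session 19 lived roughly 1.8 seconds.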
Jan 29 11:46:18.192996 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:18.194571 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:18.198350 systemd-logind[1452]: New session 22 of user core.
Jan 29 11:46:18.207090 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:46:18.313931 sshd[5702]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:18.317347 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:59076.service: Deactivated successfully.
Jan 29 11:46:18.319332 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:46:18.320089 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:46:18.320977 systemd-logind[1452]: Removed session 22.
Jan 29 11:46:23.325666 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:59418.service - OpenSSH per-connection server daemon (10.0.0.1:59418).
Jan 29 11:46:23.360757 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 59418 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:23.362321 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:23.366382 systemd-logind[1452]: New session 23 of user core.
Jan 29 11:46:23.371058 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:46:23.476754 sshd[5739]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:23.481267 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:59418.service: Deactivated successfully.
Jan 29 11:46:23.483166 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:46:23.483748 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:46:23.484662 systemd-logind[1452]: Removed session 23.
Jan 29 11:46:28.302509 kubelet[2482]: E0129 11:46:28.302436 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:46:28.302509 kubelet[2482]: E0129 11:46:28.302492 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:46:28.488674 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:59430.service - OpenSSH per-connection server daemon (10.0.0.1:59430).
Jan 29 11:46:28.525410 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 59430 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:28.527020 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:28.531054 systemd-logind[1452]: New session 24 of user core.
Jan 29 11:46:28.540042 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:46:28.646367 sshd[5761]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:28.650490 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:59430.service: Deactivated successfully.
Jan 29 11:46:28.652670 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:46:28.653561 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:46:28.654598 systemd-logind[1452]: Removed session 24.
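[Editor's note] The kubelet's dns.go errors above are a resolv.conf limit, not a lookup failure: traditional glibc resolvers honor only the first three nameserver entries, so when the node's resolv.conf lists more than three, the kubelet truncates the list (here to 1.1.1.1 1.0.0.1 8.8.8.8) and logs "Nameserver limits exceeded" each time it builds a pod's DNS config. A minimal sketch of that truncation, assuming the conventional three-server limit; this is illustrative, not the kubelet's actual implementation:

// Minimal sketch (not kubelet's real code) of the check behind the
// "Nameserver limits exceeded" errors above: keep at most three
// nameservers from resolv.conf and warn about the rest.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic glibc resolver limit

func applyNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4"
	fmt.Println(applyNameservers(conf))
}

On a host like this one, trimming the extra nameserver entries from the resolv.conf the kubelet reads (or pointing it at a dedicated file via its --resolv-conf flag) typically silences the error.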
Jan 29 11:46:30.302871 kubelet[2482]: E0129 11:46:30.302810 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:46:33.658327 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:34366.service - OpenSSH per-connection server daemon (10.0.0.1:34366).
Jan 29 11:46:33.697880 sshd[5798]: Accepted publickey for core from 10.0.0.1 port 34366 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:46:33.699442 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:46:33.703354 systemd-logind[1452]: New session 25 of user core.
Jan 29 11:46:33.713045 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:46:33.826115 sshd[5798]: pam_unix(sshd:session): session closed for user core
Jan 29 11:46:33.829382 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:34366.service: Deactivated successfully.
Jan 29 11:46:33.831367 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:46:33.832964 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:46:33.833718 systemd-logind[1452]: Removed session 25.