Jan 30 13:50:50.879269 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:50:50.879290 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:50:50.879301 kernel: BIOS-provided physical RAM map:
Jan 30 13:50:50.879307 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:50:50.879313 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:50:50.879320 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:50:50.879327 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:50:50.879333 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:50:50.879339 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 30 13:50:50.879345 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 30 13:50:50.879354 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 30 13:50:50.879360 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 30 13:50:50.879367 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 30 13:50:50.879373 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 30 13:50:50.879381 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 30 13:50:50.879387 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:50:50.879397 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 30 13:50:50.879403 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 30 13:50:50.879410 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:50:50.879416 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:50:50.879423 kernel: NX (Execute Disable) protection: active
Jan 30 13:50:50.879429 kernel: APIC: Static calls initialized
Jan 30 13:50:50.879436 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:50:50.879443 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 30 13:50:50.879449 kernel: SMBIOS 2.8 present.
Jan 30 13:50:50.879456 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 30 13:50:50.879462 kernel: Hypervisor detected: KVM
Jan 30 13:50:50.879471 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:50:50.879478 kernel: kvm-clock: using sched offset of 4546323102 cycles
Jan 30 13:50:50.879485 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:50:50.879492 kernel: tsc: Detected 2794.750 MHz processor
Jan 30 13:50:50.879499 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:50:50.879506 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:50:50.879513 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 30 13:50:50.879520 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:50:50.879527 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:50:50.879536 kernel: Using GB pages for direct mapping
Jan 30 13:50:50.879543 kernel: Secure boot disabled
Jan 30 13:50:50.879550 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:50:50.879557 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:50:50.879567 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:50:50.879575 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:50:50.879582 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:50:50.879591 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:50:50.879598 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:50:50.879605 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:50:50.879612 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:50:50.879620 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:50:50.879627 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:50:50.879634 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:50:50.879643 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:50:50.879650 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:50:50.879657 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:50:50.879664 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:50:50.879671 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:50:50.879678 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:50:50.879685 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:50:50.879692 kernel: No NUMA configuration found
Jan 30 13:50:50.879699 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 30 13:50:50.879709 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 30 13:50:50.879716 kernel: Zone ranges:
Jan 30 13:50:50.879723 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:50:50.879730 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 30 13:50:50.879737 kernel: Normal empty
Jan 30 13:50:50.879744 kernel: Movable zone start for each node
Jan 30 13:50:50.879751 kernel: Early memory node ranges
Jan 30 13:50:50.879758 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:50:50.879765 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:50:50.879772 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:50:50.879781 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 30 13:50:50.879788 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 30 13:50:50.879795 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 30 13:50:50.879802 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 30 13:50:50.879809 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:50:50.879816 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:50:50.879823 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:50:50.879830 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:50:50.879837 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 30 13:50:50.879847 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 30 13:50:50.879854 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 30 13:50:50.879861 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:50:50.879868 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:50:50.879875 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:50:50.879882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:50:50.879889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:50:50.879896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:50:50.879903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:50:50.879912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:50:50.879919 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:50:50.879926 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:50:50.879933 kernel: TSC deadline timer available
Jan 30 13:50:50.879940 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:50:50.879947 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:50:50.879954 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:50:50.879961 kernel: kvm-guest: setup PV sched yield
Jan 30 13:50:50.879968 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:50:50.879974 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:50:50.879984 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:50:50.879991 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:50:50.879999 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:50:50.880006 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:50:50.880012 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:50:50.880019 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:50:50.880026 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:50:50.880034 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:50:50.880044 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:50:50.880051 kernel: random: crng init done
Jan 30 13:50:50.880059 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:50:50.880066 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:50:50.880073 kernel: Fallback order for Node 0: 0
Jan 30 13:50:50.880080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 30 13:50:50.880087 kernel: Policy zone: DMA32
Jan 30 13:50:50.880094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:50:50.880102 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved)
Jan 30 13:50:50.880111 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:50:50.880118 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:50:50.880125 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:50:50.880133 kernel: Dynamic Preempt: voluntary
Jan 30 13:50:50.880147 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:50:50.880168 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:50:50.880176 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:50:50.880184 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:50:50.880191 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:50:50.880204 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:50:50.880212 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:50:50.880219 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:50:50.880229 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:50:50.880237 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:50:50.880244 kernel: Console: colour dummy device 80x25
Jan 30 13:50:50.880252 kernel: printk: console [ttyS0] enabled
Jan 30 13:50:50.880259 kernel: ACPI: Core revision 20230628
Jan 30 13:50:50.880269 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:50:50.880276 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:50:50.880284 kernel: x2apic enabled
Jan 30 13:50:50.880291 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:50:50.880299 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:50:50.880306 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:50:50.880314 kernel: kvm-guest: setup PV IPIs
Jan 30 13:50:50.880321 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:50:50.880329 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:50:50.880339 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 30 13:50:50.880346 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:50:50.880353 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:50:50.880361 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:50:50.880368 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:50:50.880376 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:50:50.880383 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:50:50.880390 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:50:50.880398 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:50:50.880408 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:50:50.880415 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:50:50.880423 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:50:50.880430 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:50:50.880438 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:50:50.880446 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:50:50.880453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:50:50.880461 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:50:50.880471 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:50:50.880478 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:50:50.880486 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:50:50.880493 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:50:50.880500 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:50:50.880508 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:50:50.880515 kernel: landlock: Up and running.
Jan 30 13:50:50.880523 kernel: SELinux: Initializing.
Jan 30 13:50:50.880530 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:50:50.880540 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:50:50.880547 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:50:50.880555 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:50:50.880562 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:50:50.880570 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:50:50.880577 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:50:50.880585 kernel: ... version: 0
Jan 30 13:50:50.880592 kernel: ... bit width: 48
Jan 30 13:50:50.880600 kernel: ... generic registers: 6
Jan 30 13:50:50.880609 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:50:50.880617 kernel: ... max period: 00007fffffffffff
Jan 30 13:50:50.880624 kernel: ... fixed-purpose events: 0
Jan 30 13:50:50.880631 kernel: ... event mask: 000000000000003f
Jan 30 13:50:50.880639 kernel: signal: max sigframe size: 1776
Jan 30 13:50:50.880646 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:50:50.880654 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:50:50.880661 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:50:50.880668 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:50:50.880678 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:50:50.880685 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:50:50.880693 kernel: smpboot: Max logical packages: 1
Jan 30 13:50:50.880700 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 30 13:50:50.880707 kernel: devtmpfs: initialized
Jan 30 13:50:50.880715 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:50:50.880722 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:50:50.880730 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:50:50.880737 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 30 13:50:50.880747 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:50:50.880754 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:50:50.880762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:50:50.880769 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:50:50.880777 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:50:50.880784 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:50:50.880792 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:50:50.880799 kernel: audit: type=2000 audit(1738245049.985:1): state=initialized audit_enabled=0 res=1
Jan 30 13:50:50.880806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:50:50.880816 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:50:50.880824 kernel: cpuidle: using governor menu
Jan 30 13:50:50.880831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:50:50.880838 kernel: dca service started, version 1.12.1
Jan 30 13:50:50.880846 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:50:50.880853 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:50:50.880861 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:50:50.880868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:50:50.880876 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:50:50.880885 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:50:50.880893 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:50:50.880900 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:50:50.880908 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:50:50.880915 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:50:50.880923 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:50:50.880930 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:50:50.880937 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:50:50.882024 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:50:50.882036 kernel: ACPI: Interpreter enabled
Jan 30 13:50:50.882043 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:50:50.882050 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:50:50.882058 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:50:50.882066 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:50:50.882073 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:50:50.882080 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:50:50.882269 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:50:50.882401 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:50:50.882521 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:50:50.882531 kernel: PCI host bridge to bus 0000:00
Jan 30 13:50:50.882652 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:50:50.882763 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:50:50.882875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:50:50.882982 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:50:50.883094 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:50:50.883233 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 30 13:50:50.883347 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:50:50.883489 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:50:50.883620 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:50:50.883741 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:50:50.883865 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:50:50.883983 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:50:50.884102 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:50:50.884244 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:50:50.884375 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:50:50.884496 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:50:50.884617 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:50:50.884742 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 30 13:50:50.885955 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:50:50.886081 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:50:50.886222 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:50:50.886343 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 30 13:50:50.886472 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:50:50.886593 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:50:50.886718 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:50:50.886838 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 30 13:50:50.886957 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:50:50.887088 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:50:50.887851 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:50:50.887993 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:50:50.888119 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:50:50.888288 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:50:50.888421 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:50:50.888541 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:50:50.888550 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:50:50.888558 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:50:50.888566 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:50:50.888574 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:50:50.888585 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:50:50.888592 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:50:50.888600 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:50:50.888608 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:50:50.888615 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:50:50.888623 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:50:50.888631 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:50:50.888638 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:50:50.888646 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:50:50.888656 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:50:50.888664 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:50:50.888671 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:50:50.888678 kernel: iommu: Default domain type: Translated
Jan 30 13:50:50.888686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:50:50.888694 kernel: efivars: Registered efivars operations
Jan 30 13:50:50.888701 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:50:50.888709 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:50:50.888717 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:50:50.888727 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 30 13:50:50.888735 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 30 13:50:50.888742 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 30 13:50:50.888861 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:50:50.888978 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:50:50.889096 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:50:50.889106 kernel: vgaarb: loaded
Jan 30 13:50:50.889113 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:50:50.889121 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:50:50.889132 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:50:50.889140 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:50:50.889148 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:50:50.889167 kernel: pnp: PnP ACPI init
Jan 30 13:50:50.889311 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:50:50.889322 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:50:50.889330 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:50:50.889338 kernel: NET: Registered PF_INET protocol family
Jan 30 13:50:50.889349 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:50:50.889357 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:50:50.889365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:50:50.889373 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:50:50.889380 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:50:50.889388 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:50:50.889396 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:50:50.889404 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:50:50.889412 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:50:50.889422 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:50:50.889545 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:50:50.889667 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:50:50.889778 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:50:50.889888 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:50:50.889997 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:50:50.890108 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:50:50.890240 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:50:50.890357 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 30 13:50:50.890367 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:50:50.890375 kernel: Initialise system trusted keyrings
Jan 30 13:50:50.890383 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:50:50.890390 kernel: Key type asymmetric registered
Jan 30 13:50:50.890398 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:50:50.890405 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:50:50.890413 kernel: io scheduler mq-deadline registered
Jan 30 13:50:50.890424 kernel: io scheduler kyber registered
Jan 30 13:50:50.890432 kernel: io scheduler bfq registered
Jan 30 13:50:50.890439 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:50:50.890447 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:50:50.890455 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:50:50.890463 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:50:50.890471 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:50:50.890478 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:50:50.890486 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:50:50.890494 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:50:50.890504 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:50:50.890627 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:50:50.890639 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:50:50.890754 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:50:50.890865 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:50:50 UTC (1738245050)
Jan 30 13:50:50.890978 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:50:50.890987 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:50:50.890998 kernel: efifb: probing for efifb
Jan 30 13:50:50.891006 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 30 13:50:50.891014 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 30 13:50:50.891021 kernel: efifb: scrolling: redraw
Jan 30 13:50:50.891029 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 30 13:50:50.891037 kernel: Console: switching to colour frame buffer device 100x37
Jan 30 13:50:50.891063 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:50:50.891073 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:50:50.891080 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:50:50.891090 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:50:50.891098 kernel: Segment Routing with IPv6
Jan 30 13:50:50.891106 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:50:50.891114 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:50:50.891121 kernel: Key type dns_resolver registered
Jan 30 13:50:50.891129 kernel: IPI shorthand broadcast: enabled
Jan 30 13:50:50.891137 kernel: sched_clock: Marking stable (604002941, 117789545)->(740653836, -18861350)
Jan 30 13:50:50.891145 kernel: registered taskstats version 1
Jan 30 13:50:50.891153 kernel: Loading compiled-in X.509 certificates
Jan 30 13:50:50.891764 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:50:50.891777 kernel: Key type .fscrypt registered
Jan 30 13:50:50.891786 kernel: Key type fscrypt-provisioning registered
Jan 30 13:50:50.891794 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:50:50.891802 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:50:50.891810 kernel: ima: No architecture policies found
Jan 30 13:50:50.891817 kernel: clk: Disabling unused clocks
Jan 30 13:50:50.891825 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:50:50.891833 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:50:50.891844 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:50:50.891852 kernel: Run /init as init process
Jan 30 13:50:50.891860 kernel: with arguments:
Jan 30 13:50:50.891867 kernel: /init
Jan 30 13:50:50.891875 kernel: with environment:
Jan 30 13:50:50.891883 kernel: HOME=/
Jan 30 13:50:50.891893 kernel: TERM=linux
Jan 30 13:50:50.891901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:50:50.891911 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:50:50.891924 systemd[1]: Detected virtualization kvm.
Jan 30 13:50:50.891933 systemd[1]: Detected architecture x86-64.
Jan 30 13:50:50.891941 systemd[1]: Running in initrd.
Jan 30 13:50:50.891951 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:50:50.891961 systemd[1]: Hostname set to <localhost>.
Jan 30 13:50:50.891970 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:50:50.891978 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:50:50.891987 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:50:50.891995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:50:50.892004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:50:50.892013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:50:50.892022 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:50:50.892033 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:50:50.892043 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:50:50.892051 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:50:50.892060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:50:50.892068 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:50:50.892076 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:50:50.892085 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:50:50.892095 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:50:50.892104 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:50:50.892112 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:50:50.892120 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:50:50.892129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:50:50.892138 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:50:50.892146 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:50:50.892154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:50:50.892204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:50:50.892213 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:50:50.892222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:50:50.892230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:50:50.892239 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:50:50.892247 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:50:50.892255 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:50:50.892264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:50:50.892272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:50:50.892283 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:50:50.892291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:50:50.892299 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:50:50.892308 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:50:50.892341 systemd-journald[192]: Collecting audit messages is disabled.
Jan 30 13:50:50.892361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:50:50.892370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:50:50.892379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:50:50.892390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:50:50.892399 systemd-journald[192]: Journal started
Jan 30 13:50:50.892418 systemd-journald[192]: Runtime Journal (/run/log/journal/be2edfca13894b3999d21aa56baec329) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:50:50.871750 systemd-modules-load[194]: Inserted module 'overlay'
Jan 30 13:50:50.895172 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:50:50.895486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:50:50.898599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:50:50.903529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:50:50.905190 kernel: Bridge firewalling registered
Jan 30 13:50:50.905092 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 30 13:50:50.906557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:50:50.907857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:50:50.912275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:50:50.913765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:50:50.916836 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:50:50.923189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:50:50.924671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:50:50.932013 dracut-cmdline[224]: dracut-dracut-053
Jan 30 13:50:50.934895 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:50:50.958924 systemd-resolved[229]: Positive Trust Anchors:
Jan 30 13:50:50.958942 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:50:50.958972 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:50:50.961535 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jan 30 13:50:50.962559 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:50:50.968749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:50:51.016200 kernel: SCSI subsystem initialized
Jan 30 13:50:51.026184 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:50:51.036184 kernel: iscsi: registered transport (tcp)
Jan 30 13:50:51.056452 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:50:51.056476 kernel: QLogic iSCSI HBA Driver
Jan 30 13:50:51.104100 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:50:51.111319 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:50:51.134451 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:50:51.134486 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:50:51.135474 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:50:51.176415 kernel: raid6: avx2x4 gen() 30672 MB/s
Jan 30 13:50:51.193176 kernel: raid6: avx2x2 gen() 31483 MB/s
Jan 30 13:50:51.210262 kernel: raid6: avx2x1 gen() 25510 MB/s
Jan 30 13:50:51.210276 kernel: raid6: using algorithm avx2x2 gen() 31483 MB/s
Jan 30 13:50:51.228267 kernel: raid6: .... xor() 19957 MB/s, rmw enabled
Jan 30 13:50:51.228287 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:50:51.248177 kernel: xor: automatically using best checksumming function avx
Jan 30 13:50:51.398196 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:50:51.411595 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:50:51.425317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:50:51.436556 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 30 13:50:51.441000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:50:51.448301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:50:51.463947 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jan 30 13:50:51.496455 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:50:51.519343 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:50:51.580796 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:50:51.587320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:50:51.603943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:50:51.606840 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:50:51.609681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:50:51.612247 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:50:51.619246 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:50:51.638403 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:50:51.638430 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:50:51.638681 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:50:51.638701 kernel: GPT:9289727 != 19775487
Jan 30 13:50:51.638718 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:50:51.638734 kernel: GPT:9289727 != 19775487
Jan 30 13:50:51.638751 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:50:51.638766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:50:51.624559 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:50:51.635689 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:50:51.648206 kernel: libata version 3.00 loaded.
Jan 30 13:50:51.650393 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:50:51.650413 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:50:51.652115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:50:51.652315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:50:51.656582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:50:51.663552 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:50:51.699943 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:50:51.699973 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:50:51.700219 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:50:51.700424 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468)
Jan 30 13:50:51.700463 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (476)
Jan 30 13:50:51.700508 kernel: scsi host0: ahci
Jan 30 13:50:51.700954 kernel: scsi host1: ahci
Jan 30 13:50:51.701188 kernel: scsi host2: ahci
Jan 30 13:50:51.701393 kernel: scsi host3: ahci
Jan 30 13:50:51.701584 kernel: scsi host4: ahci
Jan 30 13:50:51.702558 kernel: scsi host5: ahci
Jan 30 13:50:51.702784 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Jan 30 13:50:51.702800 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Jan 30 13:50:51.702815 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Jan 30 13:50:51.702828 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Jan 30 13:50:51.702848 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Jan 30 13:50:51.702862 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Jan 30 13:50:51.658871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:50:51.659045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:50:51.663492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:50:51.671497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:50:51.703532 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:50:51.711028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:50:51.716956 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:50:51.718243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:50:51.725933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:50:51.738388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:50:51.741093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:50:51.742224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:50:51.744758 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:50:51.747650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:50:51.762737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:50:51.773365 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:50:51.974764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:50:52.050111 disk-uuid[559]: Primary Header is updated.
Jan 30 13:50:52.050111 disk-uuid[559]: Secondary Entries is updated.
Jan 30 13:50:52.050111 disk-uuid[559]: Secondary Header is updated.
Jan 30 13:50:52.054060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:50:52.057199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:50:52.063186 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:50:52.063208 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:50:52.065027 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:50:52.065193 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:50:52.068804 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:50:52.069904 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:50:52.069930 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:50:52.071677 kernel: ata3.00: applying bridge limits
Jan 30 13:50:52.071697 kernel: ata3.00: configured for UDMA/100
Jan 30 13:50:52.072193 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:50:52.123215 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:50:52.139769 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:50:52.139785 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:50:53.059108 disk-uuid[574]: The operation has completed successfully.
Jan 30 13:50:53.060326 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:50:53.087427 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:50:53.087551 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:50:53.111316 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:50:53.116539 sh[597]: Success
Jan 30 13:50:53.129216 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:50:53.158447 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:50:53.177557 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:50:53.181976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:50:53.192537 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:50:53.192563 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:50:53.192574 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:50:53.193558 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:50:53.194330 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:50:53.198716 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:50:53.200979 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:50:53.218272 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:50:53.220762 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:50:53.228480 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:50:53.228501 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:50:53.228511 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:50:53.231197 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:50:53.239833 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:50:53.241712 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:50:53.249928 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:50:53.257319 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:50:53.303277 ignition[689]: Ignition 2.19.0
Jan 30 13:50:53.303289 ignition[689]: Stage: fetch-offline
Jan 30 13:50:53.303323 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:50:53.303333 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:50:53.303456 ignition[689]: parsed url from cmdline: ""
Jan 30 13:50:53.303460 ignition[689]: no config URL provided
Jan 30 13:50:53.303466 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:50:53.303475 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:50:53.303500 ignition[689]: op(1): [started] loading QEMU firmware config module
Jan 30 13:50:53.303506 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:50:53.314082 ignition[689]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:50:53.338740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:50:53.351346 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:50:53.357176 ignition[689]: parsing config with SHA512: 4dcdfd6b2f9570b2075c42b5a7aec31a75eb8c9dd17191e1ab7bed9f282357ed1871d3d7a6642a4e58a8fdd4f3eb40483f2d0d4812dbaf0982f88285bdeb1a2b
Jan 30 13:50:53.362510 unknown[689]: fetched base config from "system"
Jan 30 13:50:53.362524 unknown[689]: fetched user config from "qemu"
Jan 30 13:50:53.362920 ignition[689]: fetch-offline: fetch-offline passed
Jan 30 13:50:53.362977 ignition[689]: Ignition finished successfully
Jan 30 13:50:53.365462 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:50:53.373307 systemd-networkd[785]: lo: Link UP
Jan 30 13:50:53.373317 systemd-networkd[785]: lo: Gained carrier
Jan 30 13:50:53.374777 systemd-networkd[785]: Enumeration completed
Jan 30 13:50:53.374850 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:50:53.375152 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:50:53.375156 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:50:53.376053 systemd-networkd[785]: eth0: Link UP
Jan 30 13:50:53.376057 systemd-networkd[785]: eth0: Gained carrier
Jan 30 13:50:53.376063 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:50:53.376352 systemd[1]: Reached target network.target - Network.
Jan 30 13:50:53.377263 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:50:53.386304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:50:53.388197 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.158/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:50:53.398380 ignition[788]: Ignition 2.19.0
Jan 30 13:50:53.398391 ignition[788]: Stage: kargs
Jan 30 13:50:53.398538 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:50:53.398549 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:50:53.399397 ignition[788]: kargs: kargs passed
Jan 30 13:50:53.399439 ignition[788]: Ignition finished successfully
Jan 30 13:50:53.402646 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:50:53.412519 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:50:53.424322 ignition[797]: Ignition 2.19.0
Jan 30 13:50:53.424332 ignition[797]: Stage: disks
Jan 30 13:50:53.424511 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:50:53.424522 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:50:53.425446 ignition[797]: disks: disks passed
Jan 30 13:50:53.425488 ignition[797]: Ignition finished successfully
Jan 30 13:50:53.427477 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:50:53.428880 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:50:53.430464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:50:53.432622 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:50:53.433650 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:50:53.435413 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:50:53.448269 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:50:53.460375 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:50:53.466745 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:50:53.479254 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:50:53.562174 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:50:53.562471 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:50:53.564664 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:50:53.579247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:50:53.581176 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:50:53.581485 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:50:53.581521 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:50:53.589773 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 30 13:50:53.581541 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:50:53.593732 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:53.593756 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:53.593766 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:50:53.595178 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:50:53.616494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:50:53.621213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:50:53.623964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:50:53.661554 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:50:53.666935 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:50:53.671025 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:50:53.674931 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:50:53.764505 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:50:53.771348 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:50:53.774371 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:50:53.781181 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:53.796833 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:50:53.801123 ignition[929]: INFO : Ignition 2.19.0 Jan 30 13:50:53.801123 ignition[929]: INFO : Stage: mount Jan 30 13:50:53.802892 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:53.802892 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:53.802892 ignition[929]: INFO : mount: mount passed Jan 30 13:50:53.802892 ignition[929]: INFO : Ignition finished successfully Jan 30 13:50:53.804692 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:50:53.817249 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:50:54.191808 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:50:54.201336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:50:54.207178 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Jan 30 13:50:54.209306 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:50:54.209327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:54.209337 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:50:54.212177 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:50:54.213866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:50:54.243691 ignition[958]: INFO : Ignition 2.19.0 Jan 30 13:50:54.243691 ignition[958]: INFO : Stage: files Jan 30 13:50:54.245484 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:54.245484 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:54.245484 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:50:54.249265 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:50:54.249265 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:50:54.249265 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:50:54.249265 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:50:54.255089 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:50:54.255089 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:50:54.255089 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:50:54.255089 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:50:54.255089 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:50:54.249410 unknown[958]: wrote ssh authorized keys file for user: core Jan 30 13:50:54.414945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:50:54.601358 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:50:54.601358 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:50:54.605791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:50:54.721468 systemd-networkd[785]: eth0: Gained IPv6LL Jan 30 13:50:54.964646 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:50:55.351700 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:50:55.351700 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 30 13:50:55.355592 ignition[958]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:50:55.381565 ignition[958]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:50:55.386913 ignition[958]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:50:55.388604 ignition[958]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:50:55.388604 ignition[958]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:50:55.391323 ignition[958]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:50:55.392736 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:50:55.394491 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:50:55.396140 ignition[958]: INFO : files: files passed Jan 30 13:50:55.396873 ignition[958]: INFO : Ignition finished successfully Jan 30 13:50:55.398562 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:50:55.406372 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:50:55.408123 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:50:55.410542 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:50:55.410665 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:50:55.418347 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:50:55.421271 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:50:55.421271 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:50:55.424392 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:50:55.427581 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:50:55.428999 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:50:55.437322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:50:55.461318 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:50:55.461444 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:50:55.463739 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:50:55.465731 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:50:55.466752 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:50:55.470024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:50:55.489362 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:50:55.496352 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:50:55.507273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:50:55.507420 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:55.510669 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:50:55.511798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:50:55.511906 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:50:55.513769 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:50:55.514108 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:50:55.514599 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:50:55.514922 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:50:55.515426 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:50:55.515749 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:50:55.516074 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:50:55.516586 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:50:55.516907 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:50:55.517416 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:50:55.517718 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:50:55.517816 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:50:55.535937 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:50:55.537003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:55.537471 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:50:55.537631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:55.541040 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:50:55.541155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:50:55.545258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:50:55.545362 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:50:55.548317 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:50:55.549282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:50:55.553236 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:55.554572 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:50:55.556859 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:50:55.558666 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:50:55.558767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:50:55.560506 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:50:55.560593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:50:55.562323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:50:55.562436 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:50:55.564406 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:50:55.564509 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:50:55.577299 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:50:55.578233 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:50:55.578348 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:50:55.581098 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:50:55.582130 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:50:55.582276 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:50:55.588535 ignition[1013]: INFO : Ignition 2.19.0 Jan 30 13:50:55.588535 ignition[1013]: INFO : Stage: umount Jan 30 13:50:55.588535 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:55.588535 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:50:55.584539 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:50:55.595555 ignition[1013]: INFO : umount: umount passed Jan 30 13:50:55.595555 ignition[1013]: INFO : Ignition finished successfully Jan 30 13:50:55.584735 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:50:55.589709 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:50:55.589857 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:50:55.592893 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:50:55.593014 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:50:55.596817 systemd[1]: Stopped target network.target - Network. Jan 30 13:50:55.598291 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:50:55.598349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:50:55.600373 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:50:55.600420 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:50:55.602318 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:50:55.602364 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:50:55.604511 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:50:55.604558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:50:55.606552 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:50:55.608732 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:50:55.611429 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:50:55.615213 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 30 13:50:55.617243 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:50:55.617375 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:50:55.619632 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:50:55.619751 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:50:55.623241 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:50:55.623293 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:55.630279 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:50:55.631213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:50:55.631268 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:50:55.633618 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:50:55.633664 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:55.635661 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:50:55.635707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:55.637766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:50:55.637814 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:50:55.640406 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:55.652625 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:50:55.652759 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:50:55.664008 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:50:55.665067 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:55.667732 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:50:55.667786 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:55.671132 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:50:55.671199 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:55.671429 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:50:55.671478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:50:55.672107 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:50:55.672151 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:50:55.672912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:50:55.672957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:55.692314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:50:55.702545 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:50:55.702617 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:55.705997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:50:55.706066 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:55.710134 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:50:55.710278 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:50:55.787193 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:50:55.787337 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:50:55.789327 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:50:55.791010 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:50:55.791062 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:50:55.805273 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:50:55.812793 systemd[1]: Switching root. Jan 30 13:50:55.839243 systemd-journald[192]: Journal stopped Jan 30 13:50:57.022832 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:50:57.022909 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:50:57.022931 kernel: SELinux: policy capability open_perms=1 Jan 30 13:50:57.022942 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:50:57.022953 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:50:57.022965 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:50:57.022980 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:50:57.022992 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:50:57.023003 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:50:57.023014 kernel: audit: type=1403 audit(1738245056.323:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:50:57.023027 systemd[1]: Successfully loaded SELinux policy in 38.632ms. Jan 30 13:50:57.023046 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.582ms. Jan 30 13:50:57.023067 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:50:57.023079 systemd[1]: Detected virtualization kvm. Jan 30 13:50:57.023093 systemd[1]: Detected architecture x86-64. Jan 30 13:50:57.023108 systemd[1]: Detected first boot. Jan 30 13:50:57.023119 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:50:57.023136 zram_generator::config[1074]: No configuration found. Jan 30 13:50:57.023150 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:50:57.023179 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:50:57.023195 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:50:57.023208 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:50:57.023220 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:50:57.023235 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:50:57.023247 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:50:57.023259 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:50:57.023272 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:50:57.023284 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:50:57.023296 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:50:57.023308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:57.023321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:57.023333 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:50:57.023347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:50:57.023359 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:50:57.023372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:50:57.023385 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 30 13:50:57.023397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:57.023409 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:50:57.023422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:57.023434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:50:57.023446 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:50:57.023461 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:50:57.023473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:50:57.023485 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:50:57.023497 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:50:57.023510 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:50:57.023522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:57.023533 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:57.023545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:57.023561 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:50:57.023576 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:50:57.023592 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:50:57.023604 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:50:57.023616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:57.023628 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:50:57.023640 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:50:57.023653 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:50:57.023668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:50:57.023680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:50:57.023693 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:50:57.023705 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:50:57.023716 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:50:57.023728 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:50:57.023741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:50:57.023753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:50:57.023764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:50:57.023780 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:50:57.023792 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:50:57.023805 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 30 13:50:57.023817 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:50:57.023828 kernel: fuse: init (API version 7.39) Jan 30 13:50:57.023840 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:50:57.023852 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:50:57.023864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:50:57.023878 kernel: loop: module loaded Jan 30 13:50:57.023889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:50:57.023902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:57.023932 systemd-journald[1159]: Collecting audit messages is disabled. Jan 30 13:50:57.023961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:50:57.023973 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:50:57.023986 systemd-journald[1159]: Journal started Jan 30 13:50:57.024010 systemd-journald[1159]: Runtime Journal (/run/log/journal/be2edfca13894b3999d21aa56baec329) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:50:57.029058 kernel: ACPI: bus type drm_connector registered Jan 30 13:50:57.029089 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:50:57.030843 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:50:57.032068 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:50:57.033407 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:50:57.034722 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:50:57.036080 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:50:57.038088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:50:57.039694 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:50:57.039908 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:50:57.041413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:50:57.041624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:50:57.043099 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:50:57.043333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:50:57.045082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:50:57.045317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:50:57.047108 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:50:57.047349 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:50:57.048772 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:50:57.049045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:50:57.050564 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:57.052315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:50:57.054002 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 30 13:50:57.070485 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:50:57.085229 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:50:57.087650 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:50:57.088828 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:50:57.091955 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:50:57.095449 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:50:57.096798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:50:57.097998 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:50:57.099370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:50:57.101534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:50:57.110283 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:50:57.114585 systemd-journald[1159]: Time spent on flushing to /var/log/journal/be2edfca13894b3999d21aa56baec329 is 16.025ms for 983 entries. Jan 30 13:50:57.114585 systemd-journald[1159]: System Journal (/var/log/journal/be2edfca13894b3999d21aa56baec329) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:50:57.141121 systemd-journald[1159]: Received client request to flush runtime journal. Jan 30 13:50:57.113055 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:50:57.115681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:50:57.122633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:50:57.125753 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:50:57.129096 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:50:57.132297 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:50:57.139007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:57.148912 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:50:57.150832 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 30 13:50:57.150845 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 30 13:50:57.157835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:50:57.170353 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:50:57.171652 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:50:57.193773 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:50:57.202339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:50:57.220226 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 30 13:50:57.220247 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
Jan 30 13:50:57.225859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:57.662971 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:50:57.673303 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:57.696595 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jan 30 13:50:57.711817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:57.725373 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:50:57.731185 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:50:57.748412 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1254) Jan 30 13:50:57.750953 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:50:57.792422 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:50:57.801698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:50:57.812224 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:50:57.818212 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:50:57.831375 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:50:57.838516 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:50:57.845630 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:50:57.845860 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:50:57.861865 systemd-networkd[1246]: lo: Link UP Jan 30 13:50:57.871287 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:50:57.861877 systemd-networkd[1246]: lo: Gained carrier Jan 30 13:50:57.872085 systemd-networkd[1246]: Enumeration completed Jan 30 13:50:57.872625 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:50:57.872630 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:57.874412 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:50:57.874755 systemd-networkd[1246]: eth0: Link UP Jan 30 13:50:57.874838 systemd-networkd[1246]: eth0: Gained carrier Jan 30 13:50:57.874887 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:50:57.881186 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:50:57.888213 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.158/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:50:57.889453 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:50:57.897515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:57.909736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:50:57.910052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:57.912820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:50:57.955258 kernel: kvm_amd: TSC scaling supported Jan 30 13:50:57.955321 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:50:57.955335 kernel: kvm_amd: Nested Paging enabled Jan 30 13:50:57.956474 kernel: kvm_amd: LBR virtualization supported Jan 30 13:50:57.956497 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:50:57.957644 kernel: kvm_amd: Virtual GIF supported Jan 30 13:50:57.978188 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:50:57.985502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:58.010479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:50:58.024281 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:50:58.032581 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:50:58.067507 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:50:58.069038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:50:58.080283 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:50:58.084802 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:50:58.121155 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:50:58.122675 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:50:58.124002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:50:58.124038 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:50:58.125182 systemd[1]: Reached target machines.target - Containers. Jan 30 13:50:58.127336 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:50:58.138302 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:50:58.140734 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:50:58.142040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:50:58.143033 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:50:58.146077 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:50:58.149407 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:50:58.152470 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:50:58.158775 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:50:58.167185 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:50:58.179575 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:50:58.180554 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 30 13:50:58.190177 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:50:58.216188 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:50:58.244186 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:50:58.275184 kernel: loop3: detected capacity change from 0 to 140768 Jan 30 13:50:58.284185 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:50:58.293193 kernel: loop5: detected capacity change from 0 to 210664 Jan 30 13:50:58.299198 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:50:58.299801 (sd-merge)[1313]: Merged extensions into '/usr'. Jan 30 13:50:58.303967 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:50:58.303985 systemd[1]: Reloading... Jan 30 13:50:58.344266 zram_generator::config[1339]: No configuration found. Jan 30 13:50:58.398402 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:50:58.482279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:50:58.546663 systemd[1]: Reloading finished in 242 ms. Jan 30 13:50:58.564058 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:50:58.565635 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:50:58.576579 systemd[1]: Starting ensure-sysext.service... Jan 30 13:50:58.578941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:50:58.582419 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:50:58.582434 systemd[1]: Reloading... Jan 30 13:50:58.601410 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:50:58.601771 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:50:58.602732 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:50:58.603027 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 30 13:50:58.603108 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 30 13:50:58.606318 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:50:58.606331 systemd-tmpfiles[1386]: Skipping /boot Jan 30 13:50:58.619844 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:50:58.619859 systemd-tmpfiles[1386]: Skipping /boot Jan 30 13:50:58.632185 zram_generator::config[1414]: No configuration found. Jan 30 13:50:58.745460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:50:58.809333 systemd[1]: Reloading finished in 226 ms. Jan 30 13:50:58.828928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:50:58.844711 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:50:58.847295 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 30 13:50:58.849838 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:50:58.853968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:50:58.856814 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:50:58.865571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:58.866147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:50:58.869444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:50:58.873920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:50:58.879372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:50:58.883348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:50:58.885120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:50:58.886846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:50:58.890529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:50:58.890755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:50:58.893014 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:50:58.893533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:50:58.895709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:50:58.895943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:50:58.899551 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:50:58.899822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:50:58.901635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:50:58.908418 augenrules[1489]: No rules Jan 30 13:50:58.910382 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:50:58.916739 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:50:58.918725 systemd[1]: Finished ensure-sysext.service. Jan 30 13:50:58.923192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:50:58.923262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:50:58.931366 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:50:58.934100 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:50:58.935999 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:50:58.938647 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:50:58.948335 systemd-resolved[1463]: Positive Trust Anchors:
Jan 30 13:50:58.948350 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:50:58.948383 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:50:58.949060 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:50:58.952648 systemd-resolved[1463]: Defaulting to hostname 'linux'. Jan 30 13:50:58.954645 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:50:58.955874 systemd[1]: Reached target network.target - Network. Jan 30 13:50:58.956801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:50:59.008993 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:50:59.010398 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:50:59.011593 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:50:59.012963 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:50:59.014307 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:50:59.015594 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:50:59.015619 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:50:59.016537 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:50:59.017576 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:50:59.017639 systemd-timesyncd[1506]: Initial clock synchronization to Thu 2025-01-30 13:50:59.103859 UTC. Jan 30 13:50:59.017734 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:50:59.018972 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:50:59.020241 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:50:59.021703 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:50:59.024516 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:50:59.026840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:50:59.046480 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:50:59.047602 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:50:59.048572 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:50:59.049667 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:50:59.049701 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:50:59.049724 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:50:59.050863 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:50:59.053052 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:50:59.057237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:50:59.060895 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:50:59.062804 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:50:59.063947 jq[1519]: false Jan 30 13:50:59.064116 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:50:59.067343 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:50:59.071704 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:50:59.075977 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:50:59.080743 extend-filesystems[1521]: Found loop3 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found loop4 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found loop5 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found sr0 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda1 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda2 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda3 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found usr Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda4 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda6 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda7 Jan 30 13:50:59.080743 extend-filesystems[1521]: Found vda9 Jan 30 13:50:59.080743 extend-filesystems[1521]: Checking size of /dev/vda9 Jan 30 13:50:59.101731 extend-filesystems[1521]: Resized partition /dev/vda9 Jan 30 13:50:59.086351 dbus-daemon[1518]: [system] SELinux support is enabled Jan 30 13:50:59.081763 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:50:59.083835 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:50:59.086412 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:50:59.090296 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:50:59.092500 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:50:59.103582 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:50:59.103905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:50:59.107630 extend-filesystems[1543]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:50:59.108560 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:50:59.108866 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:50:59.114773 jq[1535]: true Jan 30 13:50:59.115301 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:50:59.123416 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:50:59.123768 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:50:59.124270 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:50:59.129188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1252) Jan 30 13:50:59.131412 jq[1553]: true Jan 30 13:50:59.133621 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:50:59.133652 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:50:59.137256 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:50:59.137276 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:50:59.149735 update_engine[1534]: I20250130 13:50:59.149470 1534 main.cc:92] Flatcar Update Engine starting Jan 30 13:50:59.150471 tar[1547]: linux-amd64/helm Jan 30 13:50:59.152069 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:50:59.153571 update_engine[1534]: I20250130 13:50:59.153522 1534 update_check_scheduler.cc:74] Next update check in 3m9s Jan 30 13:50:59.154706 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:50:59.161176 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:50:59.163319 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:50:59.195341 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:50:59.195341 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:50:59.195341 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:50:59.203140 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Jan 30 13:50:59.199081 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:50:59.199461 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:50:59.210555 systemd-logind[1529]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:50:59.210581 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:50:59.211006 systemd-logind[1529]: New seat seat0. Jan 30 13:50:59.212989 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:50:59.213102 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:50:59.214625 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:50:59.219027 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:50:59.227350 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:50:59.336935 containerd[1551]: time="2025-01-30T13:50:59.336811560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:50:59.362957 containerd[1551]: time="2025-01-30T13:50:59.362691354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:50:59.364391 containerd[1551]: time="2025-01-30T13:50:59.364359843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:50:59.364391 containerd[1551]: time="2025-01-30T13:50:59.364389168Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:50:59.364463 containerd[1551]: time="2025-01-30T13:50:59.364404577Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:50:59.364600 containerd[1551]: time="2025-01-30T13:50:59.364579905Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:50:59.364639 containerd[1551]: time="2025-01-30T13:50:59.364601416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:50:59.364717 containerd[1551]: time="2025-01-30T13:50:59.364669523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:50:59.364717 containerd[1551]: time="2025-01-30T13:50:59.364686956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:50:59.364964 containerd[1551]: time="2025-01-30T13:50:59.364942145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:50:59.364964 containerd[1551]: time="2025-01-30T13:50:59.364960519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:50:59.365023 containerd[1551]: time="2025-01-30T13:50:59.364978643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:50:59.365023 containerd[1551]: time="2025-01-30T13:50:59.365000875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:50:59.365141 containerd[1551]: time="2025-01-30T13:50:59.365095011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:50:59.365368 containerd[1551]: time="2025-01-30T13:50:59.365347675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:50:59.365538 containerd[1551]: time="2025-01-30T13:50:59.365517283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:50:59.365538 containerd[1551]: time="2025-01-30T13:50:59.365534956Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 13:50:59.365664 containerd[1551]: time="2025-01-30T13:50:59.365628732Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:50:59.365718 containerd[1551]: time="2025-01-30T13:50:59.365700547Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:50:59.370929 containerd[1551]: time="2025-01-30T13:50:59.370889656Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:50:59.370974 containerd[1551]: time="2025-01-30T13:50:59.370953666Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:50:59.370974 containerd[1551]: time="2025-01-30T13:50:59.370971460Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:50:59.371092 containerd[1551]: time="2025-01-30T13:50:59.370987399Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:50:59.371092 containerd[1551]: time="2025-01-30T13:50:59.371016414Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:50:59.371256 containerd[1551]: time="2025-01-30T13:50:59.371181473Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:50:59.371527 containerd[1551]: time="2025-01-30T13:50:59.371504860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:50:59.371639 containerd[1551]: time="2025-01-30T13:50:59.371612652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:50:59.371639 containerd[1551]: time="2025-01-30T13:50:59.371633972Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:50:59.371703 containerd[1551]: time="2025-01-30T13:50:59.371648870Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:50:59.371703 containerd[1551]: time="2025-01-30T13:50:59.371663106Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371703 containerd[1551]: time="2025-01-30T13:50:59.371676672Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371703 containerd[1551]: time="2025-01-30T13:50:59.371689867Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371703 containerd[1551]: time="2025-01-30T13:50:59.371704203Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371792 containerd[1551]: time="2025-01-30T13:50:59.371721205Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371792 containerd[1551]: time="2025-01-30T13:50:59.371734470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371792 containerd[1551]: time="2025-01-30T13:50:59.371746593Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 13:50:59.371792 containerd[1551]: time="2025-01-30T13:50:59.371759587Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:50:59.371792 containerd[1551]: time="2025-01-30T13:50:59.371779565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371792 containerd[1551]: time="2025-01-30T13:50:59.371793521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371806656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371821453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371834888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371849746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371862400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371880925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.371902 containerd[1551]: time="2025-01-30T13:50:59.371894641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.371915329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.371927983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.371941719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.371956877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.371977877Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.372006971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.372018493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372058 containerd[1551]: time="2025-01-30T13:50:59.372029604Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372072574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372090117Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372104664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372115895Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372126816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372142195Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372169686Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:50:59.372220 containerd[1551]: time="2025-01-30T13:50:59.372181188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:50:59.372618 containerd[1551]: time="2025-01-30T13:50:59.372553566Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:50:59.372618 containerd[1551]: time="2025-01-30T13:50:59.372611895Z" level=info msg="Connect containerd service" Jan 30 13:50:59.372789 containerd[1551]: time="2025-01-30T13:50:59.372647121Z" level=info msg="using legacy CRI server" Jan 30 13:50:59.372789 containerd[1551]: time="2025-01-30T13:50:59.372654956Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:50:59.372789 containerd[1551]: time="2025-01-30T13:50:59.372746818Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:50:59.374688 containerd[1551]: time="2025-01-30T13:50:59.374655898Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:50:59.374841 containerd[1551]: time="2025-01-30T13:50:59.374803815Z" level=info msg="Start subscribing containerd event" Jan 30 13:50:59.374869 containerd[1551]: time="2025-01-30T13:50:59.374853879Z" level=info msg="Start recovering state" Jan 30 13:50:59.374926 containerd[1551]: time="2025-01-30T13:50:59.374910395Z" level=info msg="Start event monitor" Jan 30 13:50:59.374926 containerd[1551]: time="2025-01-30T13:50:59.374924371Z" level=info msg="Start snapshots syncer" Jan 30 13:50:59.374971 containerd[1551]: time="2025-01-30T13:50:59.374932977Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:50:59.374971 containerd[1551]: time="2025-01-30T13:50:59.374941123Z" level=info msg="Start streaming server" Jan 30 13:50:59.375354 containerd[1551]: time="2025-01-30T13:50:59.375332877Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:50:59.375535 containerd[1551]: time="2025-01-30T13:50:59.375400374Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:50:59.375555 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:50:59.376843 containerd[1551]: time="2025-01-30T13:50:59.376117428Z" level=info msg="containerd successfully booted in 0.040858s" Jan 30 13:50:59.449778 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:50:59.472536 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:50:59.482826 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:50:59.489695 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:50:59.490064 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:50:59.504388 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:50:59.514811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:50:59.525550 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:50:59.528174 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:50:59.529693 systemd[1]: Reached target getty.target - Login Prompts. 
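containerd's CRI plugin above reports "no network config found in /etc/cni/net.d"; per the config dump, NetworkPluginConfDir is /etc/cni/net.d and NetworkPluginBinDir is /opt/cni/bin. A hedged sketch of the directory check behind that error (the accepted file extensions are assumed from CNI conventions, not stated in the log):

    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")   # NetworkPluginConfDir from the CRI config dump above

    def cni_configs():
        # CNI loaders pick up *.conf, *.conflist and *.json files from the conf dir;
        # an empty result here is what the "cni plugin not initialized" error reflects.
        return sorted(p for p in CNI_CONF_DIR.glob("*")
                      if p.suffix in {".conf", ".conflist", ".json"})

    print(cni_configs() or "no network config found in /etc/cni/net.d")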
Jan 30 13:50:59.556535 tar[1547]: linux-amd64/LICENSE Jan 30 13:50:59.556607 tar[1547]: linux-amd64/README.md Jan 30 13:50:59.575098 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:50:59.777305 systemd-networkd[1246]: eth0: Gained IPv6LL Jan 30 13:50:59.780205 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:50:59.782086 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:50:59.793355 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:50:59.795914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:59.798602 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:50:59.818033 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:50:59.818394 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:50:59.820078 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:50:59.822858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:51:00.423561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:00.425321 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:51:00.427510 systemd[1]: Startup finished in 6.355s (kernel) + 4.142s (userspace) = 10.498s. Jan 30 13:51:00.429064 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:00.848296 kubelet[1655]: E0130 13:51:00.848152 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:00.852506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:00.852806 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:05.281492 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:51:05.298466 systemd[1]: Started sshd@0-10.0.0.158:22-10.0.0.1:38158.service - OpenSSH per-connection server daemon (10.0.0.1:38158). Jan 30 13:51:05.334482 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 38158 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:05.336733 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:05.346204 systemd-logind[1529]: New session 1 of user core. Jan 30 13:51:05.347373 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:51:05.357370 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:51:05.368885 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:51:05.371393 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:51:05.379382 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:51:05.491434 systemd[1676]: Queued start job for default target default.target. Jan 30 13:51:05.491816 systemd[1676]: Created slice app.slice - User Application Slice. Jan 30 13:51:05.491838 systemd[1676]: Reached target paths.target - Paths. 
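Note that the startup components printed above do not quite sum: 6.355 s + 4.142 s = 10.497 s, versus the reported 10.498 s. systemd derives each figure from microsecond timestamps and rounds them independently for display, so a 1 ms discrepancy is expected. A toy reproduction of the rounding effect; the raw microsecond values are assumptions chosen only to reproduce the printed numbers:

    # Hypothetical raw timestamps (usec) consistent with the logged figures.
    kernel_us, userspace_us = 6_355_400, 4_142_300
    total_us = kernel_us + userspace_us                # 10_497_700
    fmt = lambda us: f"{us / 1_000_000:.3f}s"
    print(fmt(kernel_us), "+", fmt(userspace_us), "=", fmt(total_us))
    # 6.355s + 4.142s = 10.498s -- parts and total are rounded independently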
Jan 30 13:51:05.491851 systemd[1676]: Reached target timers.target - Timers. Jan 30 13:51:05.500259 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:51:05.506416 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:51:05.506481 systemd[1676]: Reached target sockets.target - Sockets. Jan 30 13:51:05.506494 systemd[1676]: Reached target basic.target - Basic System. Jan 30 13:51:05.506531 systemd[1676]: Reached target default.target - Main User Target. Jan 30 13:51:05.506561 systemd[1676]: Startup finished in 120ms. Jan 30 13:51:05.507610 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:51:05.509451 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:51:05.575408 systemd[1]: Started sshd@1-10.0.0.158:22-10.0.0.1:38160.service - OpenSSH per-connection server daemon (10.0.0.1:38160). Jan 30 13:51:05.602835 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 38160 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:05.604450 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:05.608370 systemd-logind[1529]: New session 2 of user core. Jan 30 13:51:05.614444 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:51:05.668180 sshd[1688]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:05.679459 systemd[1]: Started sshd@2-10.0.0.158:22-10.0.0.1:38166.service - OpenSSH per-connection server daemon (10.0.0.1:38166). Jan 30 13:51:05.680016 systemd[1]: sshd@1-10.0.0.158:22-10.0.0.1:38160.service: Deactivated successfully. Jan 30 13:51:05.682718 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:51:05.683572 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:51:05.684601 systemd-logind[1529]: Removed session 2. Jan 30 13:51:05.709821 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 38166 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:05.711273 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:05.715153 systemd-logind[1529]: New session 3 of user core. Jan 30 13:51:05.725455 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:51:05.774223 sshd[1693]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:05.782444 systemd[1]: Started sshd@3-10.0.0.158:22-10.0.0.1:38180.service - OpenSSH per-connection server daemon (10.0.0.1:38180). Jan 30 13:51:05.783055 systemd[1]: sshd@2-10.0.0.158:22-10.0.0.1:38166.service: Deactivated successfully. Jan 30 13:51:05.785899 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:51:05.786572 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:51:05.788125 systemd-logind[1529]: Removed session 3. Jan 30 13:51:05.808607 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 38180 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:05.810092 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:05.814538 systemd-logind[1529]: New session 4 of user core. Jan 30 13:51:05.832514 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:51:05.887398 sshd[1701]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:05.897467 systemd[1]: Started sshd@4-10.0.0.158:22-10.0.0.1:38190.service - OpenSSH per-connection server daemon (10.0.0.1:38190). 
Jan 30 13:51:05.897965 systemd[1]: sshd@3-10.0.0.158:22-10.0.0.1:38180.service: Deactivated successfully. Jan 30 13:51:05.900302 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:51:05.901273 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:51:05.902247 systemd-logind[1529]: Removed session 4. Jan 30 13:51:05.923461 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 38190 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:05.924941 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:05.929065 systemd-logind[1529]: New session 5 of user core. Jan 30 13:51:05.939425 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:51:05.996831 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:51:05.997200 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:06.019392 sudo[1716]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:06.021528 sshd[1709]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:06.039425 systemd[1]: Started sshd@5-10.0.0.158:22-10.0.0.1:38194.service - OpenSSH per-connection server daemon (10.0.0.1:38194). Jan 30 13:51:06.039890 systemd[1]: sshd@4-10.0.0.158:22-10.0.0.1:38190.service: Deactivated successfully. Jan 30 13:51:06.042405 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:51:06.043038 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:51:06.044009 systemd-logind[1529]: Removed session 5. Jan 30 13:51:06.066760 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 38194 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:06.068238 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:06.072026 systemd-logind[1529]: New session 6 of user core. Jan 30 13:51:06.081453 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:51:06.134057 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:51:06.134408 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:06.137943 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:06.144112 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:51:06.144475 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:06.165364 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:06.167135 auditctl[1729]: No rules Jan 30 13:51:06.168453 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:51:06.168811 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:06.170689 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:06.201311 augenrules[1748]: No rules Jan 30 13:51:06.202509 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:06.203873 sudo[1725]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:06.205656 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:06.224464 systemd[1]: Started sshd@6-10.0.0.158:22-10.0.0.1:38210.service - OpenSSH per-connection server daemon (10.0.0.1:38210). 
Jan 30 13:51:06.225045 systemd[1]: sshd@5-10.0.0.158:22-10.0.0.1:38194.service: Deactivated successfully. Jan 30 13:51:06.226753 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:51:06.227511 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:51:06.228868 systemd-logind[1529]: Removed session 6. Jan 30 13:51:06.251225 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 38210 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:06.252605 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:06.256464 systemd-logind[1529]: New session 7 of user core. Jan 30 13:51:06.266466 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:51:06.319727 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:51:06.320070 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:06.602368 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:51:06.602634 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:51:06.870425 dockerd[1779]: time="2025-01-30T13:51:06.870276892Z" level=info msg="Starting up" Jan 30 13:51:07.805177 dockerd[1779]: time="2025-01-30T13:51:07.805106736Z" level=info msg="Loading containers: start." Jan 30 13:51:07.914196 kernel: Initializing XFRM netlink socket Jan 30 13:51:07.990993 systemd-networkd[1246]: docker0: Link UP Jan 30 13:51:08.009006 dockerd[1779]: time="2025-01-30T13:51:08.008956016Z" level=info msg="Loading containers: done." Jan 30 13:51:08.025879 dockerd[1779]: time="2025-01-30T13:51:08.025824055Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:51:08.026048 dockerd[1779]: time="2025-01-30T13:51:08.025954915Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:51:08.026108 dockerd[1779]: time="2025-01-30T13:51:08.026086185Z" level=info msg="Daemon has completed initialization" Jan 30 13:51:08.067412 dockerd[1779]: time="2025-01-30T13:51:08.067295918Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:51:08.067790 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:51:08.859935 containerd[1551]: time="2025-01-30T13:51:08.859892429Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:51:09.480657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373331270.mount: Deactivated successfully. 
Jan 30 13:51:10.441615 containerd[1551]: time="2025-01-30T13:51:10.441558307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:10.442333 containerd[1551]: time="2025-01-30T13:51:10.442262563Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:51:10.443752 containerd[1551]: time="2025-01-30T13:51:10.443725239Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:10.446555 containerd[1551]: time="2025-01-30T13:51:10.446506348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:10.447570 containerd[1551]: time="2025-01-30T13:51:10.447517163Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.587582906s" Jan 30 13:51:10.447607 containerd[1551]: time="2025-01-30T13:51:10.447570768Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:51:10.471108 containerd[1551]: time="2025-01-30T13:51:10.471057176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:51:11.102963 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:11.127383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:11.292345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:11.296930 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:11.709663 kubelet[2009]: E0130 13:51:11.709604 2009 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:11.717011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:11.717367 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:51:12.456966 containerd[1551]: time="2025-01-30T13:51:12.456905797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:12.483073 containerd[1551]: time="2025-01-30T13:51:12.483000863Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:51:12.513432 containerd[1551]: time="2025-01-30T13:51:12.513378396Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:12.553278 containerd[1551]: time="2025-01-30T13:51:12.553240924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:12.554290 containerd[1551]: time="2025-01-30T13:51:12.554249316Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.083152509s" Jan 30 13:51:12.554290 containerd[1551]: time="2025-01-30T13:51:12.554295977Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:51:12.576525 containerd[1551]: time="2025-01-30T13:51:12.576483108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:51:13.740873 containerd[1551]: time="2025-01-30T13:51:13.740798060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:13.741724 containerd[1551]: time="2025-01-30T13:51:13.741651613Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:51:13.743098 containerd[1551]: time="2025-01-30T13:51:13.743069569Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:13.748180 containerd[1551]: time="2025-01-30T13:51:13.748138300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:13.749127 containerd[1551]: time="2025-01-30T13:51:13.749088823Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.172563716s" Jan 30 13:51:13.749127 containerd[1551]: time="2025-01-30T13:51:13.749131104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:51:13.771539 
containerd[1551]: time="2025-01-30T13:51:13.771506186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:51:14.941277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296115042.mount: Deactivated successfully. Jan 30 13:51:15.462719 containerd[1551]: time="2025-01-30T13:51:15.462665809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:15.463579 containerd[1551]: time="2025-01-30T13:51:15.463539534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:51:15.464765 containerd[1551]: time="2025-01-30T13:51:15.464715273Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:15.466601 containerd[1551]: time="2025-01-30T13:51:15.466565788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:15.467178 containerd[1551]: time="2025-01-30T13:51:15.467133144Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.695597771s" Jan 30 13:51:15.467220 containerd[1551]: time="2025-01-30T13:51:15.467184918Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:51:15.489386 containerd[1551]: time="2025-01-30T13:51:15.489345196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:51:16.002840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405721491.mount: Deactivated successfully. 
Jan 30 13:51:17.104628 containerd[1551]: time="2025-01-30T13:51:17.104569735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.105627 containerd[1551]: time="2025-01-30T13:51:17.105555253Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:51:17.107178 containerd[1551]: time="2025-01-30T13:51:17.107146474Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.110181 containerd[1551]: time="2025-01-30T13:51:17.110129457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.111113 containerd[1551]: time="2025-01-30T13:51:17.111059618Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.621674412s" Jan 30 13:51:17.111146 containerd[1551]: time="2025-01-30T13:51:17.111119298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:51:17.133009 containerd[1551]: time="2025-01-30T13:51:17.132958673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:51:17.644896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370383103.mount: Deactivated successfully. 
Jan 30 13:51:17.649699 containerd[1551]: time="2025-01-30T13:51:17.649651626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.650302 containerd[1551]: time="2025-01-30T13:51:17.650248503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:51:17.651343 containerd[1551]: time="2025-01-30T13:51:17.651308255Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.653455 containerd[1551]: time="2025-01-30T13:51:17.653400284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:17.654093 containerd[1551]: time="2025-01-30T13:51:17.654057544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 521.050254ms" Jan 30 13:51:17.654173 containerd[1551]: time="2025-01-30T13:51:17.654095518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:51:17.674789 containerd[1551]: time="2025-01-30T13:51:17.674748078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:51:18.549351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266820456.mount: Deactivated successfully. Jan 30 13:51:20.189332 containerd[1551]: time="2025-01-30T13:51:20.189272041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:20.189961 containerd[1551]: time="2025-01-30T13:51:20.189910594Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:51:20.191227 containerd[1551]: time="2025-01-30T13:51:20.191148990Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:20.193793 containerd[1551]: time="2025-01-30T13:51:20.193739269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:20.194956 containerd[1551]: time="2025-01-30T13:51:20.194927492Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.520151492s" Jan 30 13:51:20.195024 containerd[1551]: time="2025-01-30T13:51:20.194956709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:51:21.967480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 30 13:51:21.975300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:22.109424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:22.111468 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:22.151195 kubelet[2251]: E0130 13:51:22.151125 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:22.152762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:22.155481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:22.155637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:22.156119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:22.167348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:22.184053 systemd[1]: Reloading requested from client PID 2269 ('systemctl') (unit session-7.scope)... Jan 30 13:51:22.184067 systemd[1]: Reloading... Jan 30 13:51:22.265257 zram_generator::config[2311]: No configuration found. Jan 30 13:51:22.671285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:22.742236 systemd[1]: Reloading finished in 557 ms. Jan 30 13:51:22.782068 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:51:22.782180 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:51:22.782548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:22.798556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:22.932571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:22.937016 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:22.976397 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:22.976397 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:51:22.976397 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:51:22.976783 kubelet[2368]: I0130 13:51:22.976442 2368 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:23.403603 kubelet[2368]: I0130 13:51:23.403503 2368 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:51:23.403603 kubelet[2368]: I0130 13:51:23.403543 2368 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:23.403783 kubelet[2368]: I0130 13:51:23.403762 2368 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:51:23.417641 kubelet[2368]: I0130 13:51:23.417602 2368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:23.418134 kubelet[2368]: E0130 13:51:23.418109 2368 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.429131 kubelet[2368]: I0130 13:51:23.429104 2368 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:51:23.430915 kubelet[2368]: I0130 13:51:23.430881 2368 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:23.431067 kubelet[2368]: I0130 13:51:23.430913 2368 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:51:23.431481 kubelet[2368]: I0130 13:51:23.431456 2368 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:23.431481 kubelet[2368]: I0130 13:51:23.431472 2368 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:51:23.431611 kubelet[2368]: I0130 13:51:23.431589 2368 state_mem.go:36] "Initialized new in-memory state store" Jan 30 
13:51:23.432207 kubelet[2368]: I0130 13:51:23.432152 2368 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:51:23.432241 kubelet[2368]: I0130 13:51:23.432212 2368 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:23.432241 kubelet[2368]: I0130 13:51:23.432233 2368 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:51:23.432281 kubelet[2368]: I0130 13:51:23.432248 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:23.435378 kubelet[2368]: W0130 13:51:23.435331 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.436703 kubelet[2368]: W0130 13:51:23.435504 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.436703 kubelet[2368]: E0130 13:51:23.435552 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.436703 kubelet[2368]: E0130 13:51:23.435641 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.436703 kubelet[2368]: I0130 13:51:23.436479 2368 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:51:23.437815 kubelet[2368]: I0130 13:51:23.437786 2368 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:23.437953 kubelet[2368]: W0130 13:51:23.437918 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
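The reflector failures above are all the same underlying condition: nothing is listening on 10.0.0.158:6443 yet, since the kubelet itself must first launch the API server from the static pod path (/etc/kubernetes/manifests) it just registered. A bare TCP probe reproduces the failure mode; the endpoint is taken from the log, the timeout is an assumption:

    import socket

    APISERVER = ("10.0.0.158", 6443)    # endpoint from the reflector errors above

    try:
        socket.create_connection(APISERVER, timeout=2).close()
        print("apiserver reachable")
    except OSError as e:
        # Before the static pod is up this prints the same "connection refused"
        # that the client-go reflectors report.
        print(f"dial tcp {APISERVER[0]}:{APISERVER[1]}: {e.strerror}")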
Jan 30 13:51:23.439039 kubelet[2368]: I0130 13:51:23.438929 2368 server.go:1264] "Started kubelet"
Jan 30 13:51:23.439760 kubelet[2368]: I0130 13:51:23.439250 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:51:23.439760 kubelet[2368]: I0130 13:51:23.439569 2368 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:51:23.439760 kubelet[2368]: I0130 13:51:23.439600 2368 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:51:23.440537 kubelet[2368]: I0130 13:51:23.440508 2368 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:51:23.442260 kubelet[2368]: I0130 13:51:23.442040 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:51:23.444947 kubelet[2368]: I0130 13:51:23.444638 2368 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:51:23.444947 kubelet[2368]: I0130 13:51:23.444747 2368 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:51:23.444947 kubelet[2368]: I0130 13:51:23.444783 2368 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:51:23.445054 kubelet[2368]: W0130 13:51:23.445007 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused
Jan 30 13:51:23.445054 kubelet[2368]: E0130 13:51:23.445038 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused
Jan 30 13:51:23.445622 kubelet[2368]: E0130 13:51:23.445135 2368 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:51:23.445622 kubelet[2368]: E0130 13:51:23.445380 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="200ms"
Jan 30 13:51:23.446227 kubelet[2368]: E0130 13:51:23.445863 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.158:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.158:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7cb19af1f11a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:51:23.438903578 +0000 UTC m=+0.497648407,LastTimestamp:2025-01-30 13:51:23.438903578 +0000 UTC m=+0.497648407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 30 13:51:23.446227 kubelet[2368]: I0130 13:51:23.445992 2368 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:51:23.446227 kubelet[2368]: I0130 13:51:23.446077 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:51:23.446931 kubelet[2368]: I0130 13:51:23.446917 2368 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:51:23.457168 kubelet[2368]: I0130 13:51:23.457060 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
protocol="IPv6" Jan 30 13:51:23.458427 kubelet[2368]: I0130 13:51:23.458222 2368 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:23.458427 kubelet[2368]: I0130 13:51:23.458240 2368 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:51:23.458427 kubelet[2368]: E0130 13:51:23.458279 2368 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:23.458768 kubelet[2368]: W0130 13:51:23.458664 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.458768 kubelet[2368]: E0130 13:51:23.458705 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:23.468717 kubelet[2368]: I0130 13:51:23.468700 2368 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:23.469019 kubelet[2368]: I0130 13:51:23.468791 2368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:23.469019 kubelet[2368]: I0130 13:51:23.468819 2368 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:23.545858 kubelet[2368]: I0130 13:51:23.545825 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:51:23.548373 kubelet[2368]: E0130 13:51:23.548337 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Jan 30 13:51:23.558401 kubelet[2368]: E0130 13:51:23.558358 2368 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:51:23.645878 kubelet[2368]: E0130 13:51:23.645842 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="400ms" Jan 30 13:51:23.709365 kubelet[2368]: I0130 13:51:23.709281 2368 policy_none.go:49] "None policy: Start" Jan 30 13:51:23.709751 kubelet[2368]: I0130 13:51:23.709735 2368 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:23.709834 kubelet[2368]: I0130 13:51:23.709757 2368 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:23.716405 kubelet[2368]: I0130 13:51:23.716373 2368 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:23.716599 kubelet[2368]: I0130 13:51:23.716562 2368 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:23.716692 kubelet[2368]: I0130 13:51:23.716672 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:23.718136 kubelet[2368]: E0130 13:51:23.718116 2368 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:51:23.749382 kubelet[2368]: I0130 13:51:23.749363 2368 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Jan 30 13:51:23.749691 kubelet[2368]: E0130 13:51:23.749659 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Jan 30 13:51:23.758800 kubelet[2368]: I0130 13:51:23.758757 2368 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:51:23.759643 kubelet[2368]: I0130 13:51:23.759620 2368 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:51:23.760496 kubelet[2368]: I0130 13:51:23.760473 2368 topology_manager.go:215] "Topology Admit Handler" podUID="a98996ab16dbf93db224f88ab13b9454" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:51:23.846958 kubelet[2368]: I0130 13:51:23.846936 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:23.847029 kubelet[2368]: I0130 13:51:23.846965 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:51:23.847029 kubelet[2368]: I0130 13:51:23.846983 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a98996ab16dbf93db224f88ab13b9454-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a98996ab16dbf93db224f88ab13b9454\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:23.847029 kubelet[2368]: I0130 13:51:23.846998 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:23.847029 kubelet[2368]: I0130 13:51:23.847013 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:23.847029 kubelet[2368]: I0130 13:51:23.847028 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a98996ab16dbf93db224f88ab13b9454-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a98996ab16dbf93db224f88ab13b9454\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:23.847141 kubelet[2368]: I0130 13:51:23.847071 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a98996ab16dbf93db224f88ab13b9454-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a98996ab16dbf93db224f88ab13b9454\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:23.847141 kubelet[2368]: I0130 13:51:23.847090 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:23.847141 kubelet[2368]: I0130 13:51:23.847105 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:24.046415 kubelet[2368]: E0130 13:51:24.046333 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="800ms" Jan 30 13:51:24.064726 kubelet[2368]: E0130 13:51:24.064709 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:24.064811 kubelet[2368]: E0130 13:51:24.064709 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:24.065385 containerd[1551]: time="2025-01-30T13:51:24.065147623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:24.065385 containerd[1551]: time="2025-01-30T13:51:24.065234083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:24.066425 kubelet[2368]: E0130 13:51:24.066409 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:24.066751 containerd[1551]: time="2025-01-30T13:51:24.066715212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a98996ab16dbf93db224f88ab13b9454,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:24.151069 kubelet[2368]: I0130 13:51:24.151049 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:51:24.151350 kubelet[2368]: E0130 13:51:24.151315 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Jan 30 13:51:24.452597 kubelet[2368]: W0130 13:51:24.452495 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.452597 kubelet[2368]: E0130 13:51:24.452547 2368 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.657512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064110039.mount: Deactivated successfully. Jan 30 13:51:24.658834 kubelet[2368]: W0130 13:51:24.658774 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.658893 kubelet[2368]: E0130 13:51:24.658835 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.664673 containerd[1551]: time="2025-01-30T13:51:24.664628193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:24.666622 containerd[1551]: time="2025-01-30T13:51:24.666584695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:24.667693 containerd[1551]: time="2025-01-30T13:51:24.667652203Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:24.668762 containerd[1551]: time="2025-01-30T13:51:24.668727349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:24.669688 containerd[1551]: time="2025-01-30T13:51:24.669645962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:51:24.670723 containerd[1551]: time="2025-01-30T13:51:24.670677877Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:24.671638 containerd[1551]: time="2025-01-30T13:51:24.671603466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:24.673420 containerd[1551]: time="2025-01-30T13:51:24.673377474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:24.675035 containerd[1551]: time="2025-01-30T13:51:24.675004913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.701509ms" Jan 30 13:51:24.675691 containerd[1551]: time="2025-01-30T13:51:24.675656987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.869246ms" Jan 30 13:51:24.678020 containerd[1551]: time="2025-01-30T13:51:24.677984901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.734039ms" Jan 30 13:51:24.685249 kubelet[2368]: W0130 13:51:24.685201 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.685301 kubelet[2368]: E0130 13:51:24.685261 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.750569 kubelet[2368]: W0130 13:51:24.750437 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.750569 kubelet[2368]: E0130 13:51:24.750506 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.158:6443: connect: connection refused Jan 30 13:51:24.808567 containerd[1551]: time="2025-01-30T13:51:24.808453707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:24.808567 containerd[1551]: time="2025-01-30T13:51:24.808566600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:24.808749 containerd[1551]: time="2025-01-30T13:51:24.808595457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:24.808805 containerd[1551]: time="2025-01-30T13:51:24.808733798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:24.809385 containerd[1551]: time="2025-01-30T13:51:24.809235112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:24.809385 containerd[1551]: time="2025-01-30T13:51:24.809287773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:24.809385 containerd[1551]: time="2025-01-30T13:51:24.809355902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:24.809867 containerd[1551]: time="2025-01-30T13:51:24.809790350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:24.809867 containerd[1551]: time="2025-01-30T13:51:24.809837439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:24.809867 containerd[1551]: time="2025-01-30T13:51:24.809852054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:24.810388 containerd[1551]: time="2025-01-30T13:51:24.809929965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:24.810800 containerd[1551]: time="2025-01-30T13:51:24.810558844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:24.847524 kubelet[2368]: E0130 13:51:24.847480 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.158:6443: connect: connection refused" interval="1.6s" Jan 30 13:51:24.865236 containerd[1551]: time="2025-01-30T13:51:24.865184339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a58abc1c35c17e0c3870e1b6be14a0bbb489d54869ea1ebc0b6afd35720bd433\"" Jan 30 13:51:24.866198 containerd[1551]: time="2025-01-30T13:51:24.866052264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e88822c84c41203d07ad04282e06160f56284f14e4591626a308463a1ad3b274\"" Jan 30 13:51:24.866896 containerd[1551]: time="2025-01-30T13:51:24.866860631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a98996ab16dbf93db224f88ab13b9454,Namespace:kube-system,Attempt:0,} returns sandbox id \"0286fc60b695290102d1489583a6a37281270eb38bab8f9218bcd7ca3bb8ad2b\"" Jan 30 13:51:24.866935 kubelet[2368]: E0130 13:51:24.866887 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:24.867111 kubelet[2368]: E0130 13:51:24.867095 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:24.868153 kubelet[2368]: E0130 13:51:24.868132 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:24.869468 containerd[1551]: time="2025-01-30T13:51:24.869437914Z" level=info msg="CreateContainer within sandbox \"a58abc1c35c17e0c3870e1b6be14a0bbb489d54869ea1ebc0b6afd35720bd433\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:51:24.869661 containerd[1551]: time="2025-01-30T13:51:24.869637237Z" level=info msg="CreateContainer within sandbox 
\"e88822c84c41203d07ad04282e06160f56284f14e4591626a308463a1ad3b274\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:51:24.870598 containerd[1551]: time="2025-01-30T13:51:24.870577459Z" level=info msg="CreateContainer within sandbox \"0286fc60b695290102d1489583a6a37281270eb38bab8f9218bcd7ca3bb8ad2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:51:24.895989 containerd[1551]: time="2025-01-30T13:51:24.895955856Z" level=info msg="CreateContainer within sandbox \"e88822c84c41203d07ad04282e06160f56284f14e4591626a308463a1ad3b274\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24d689dbb8bd58a58316520d5aa19669b9931faf8400aa82abb69f565c5c7036\"" Jan 30 13:51:24.896451 containerd[1551]: time="2025-01-30T13:51:24.896423442Z" level=info msg="StartContainer for \"24d689dbb8bd58a58316520d5aa19669b9931faf8400aa82abb69f565c5c7036\"" Jan 30 13:51:24.898368 containerd[1551]: time="2025-01-30T13:51:24.898298865Z" level=info msg="CreateContainer within sandbox \"a58abc1c35c17e0c3870e1b6be14a0bbb489d54869ea1ebc0b6afd35720bd433\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"956b46bde99448bd4ce6d46d734d5297da836d9ff2af6d33a764ada36d33db86\"" Jan 30 13:51:24.902111 containerd[1551]: time="2025-01-30T13:51:24.899417081Z" level=info msg="StartContainer for \"956b46bde99448bd4ce6d46d734d5297da836d9ff2af6d33a764ada36d33db86\"" Jan 30 13:51:24.904548 containerd[1551]: time="2025-01-30T13:51:24.904527565Z" level=info msg="CreateContainer within sandbox \"0286fc60b695290102d1489583a6a37281270eb38bab8f9218bcd7ca3bb8ad2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e2524ba7bb2084c68de4d5488729541eafd15cf047b426b0f8c398f793a3e363\"" Jan 30 13:51:24.905075 containerd[1551]: time="2025-01-30T13:51:24.905033210Z" level=info msg="StartContainer for \"e2524ba7bb2084c68de4d5488729541eafd15cf047b426b0f8c398f793a3e363\"" Jan 30 13:51:24.953425 kubelet[2368]: I0130 13:51:24.953390 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:51:24.953818 kubelet[2368]: E0130 13:51:24.953773 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.158:6443/api/v1/nodes\": dial tcp 10.0.0.158:6443: connect: connection refused" node="localhost" Jan 30 13:51:24.970978 containerd[1551]: time="2025-01-30T13:51:24.970375483Z" level=info msg="StartContainer for \"e2524ba7bb2084c68de4d5488729541eafd15cf047b426b0f8c398f793a3e363\" returns successfully" Jan 30 13:51:24.970978 containerd[1551]: time="2025-01-30T13:51:24.970945646Z" level=info msg="StartContainer for \"956b46bde99448bd4ce6d46d734d5297da836d9ff2af6d33a764ada36d33db86\" returns successfully" Jan 30 13:51:24.971935 containerd[1551]: time="2025-01-30T13:51:24.971737896Z" level=info msg="StartContainer for \"24d689dbb8bd58a58316520d5aa19669b9931faf8400aa82abb69f565c5c7036\" returns successfully" Jan 30 13:51:25.471607 kubelet[2368]: E0130 13:51:25.471575 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:25.475824 kubelet[2368]: E0130 13:51:25.474948 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:25.476612 kubelet[2368]: E0130 13:51:25.476532 2368 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:26.233453 kubelet[2368]: E0130 13:51:26.233411 2368 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 30 13:51:26.450917 kubelet[2368]: E0130 13:51:26.450879 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:51:26.478870 kubelet[2368]: E0130 13:51:26.478849 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:26.555799 kubelet[2368]: I0130 13:51:26.555698 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:51:26.562591 kubelet[2368]: I0130 13:51:26.562560 2368 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:51:26.567915 kubelet[2368]: E0130 13:51:26.567884 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:26.668227 kubelet[2368]: E0130 13:51:26.668179 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:26.768716 kubelet[2368]: E0130 13:51:26.768674 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:26.869273 kubelet[2368]: E0130 13:51:26.869137 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:26.970284 kubelet[2368]: E0130 13:51:26.970234 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:27.070792 kubelet[2368]: E0130 13:51:27.070759 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:27.171561 kubelet[2368]: E0130 13:51:27.171435 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:27.272035 kubelet[2368]: E0130 13:51:27.271983 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:27.373142 kubelet[2368]: E0130 13:51:27.373096 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:51:27.436208 kubelet[2368]: I0130 13:51:27.436083 2368 apiserver.go:52] "Watching apiserver" Jan 30 13:51:27.446767 kubelet[2368]: I0130 13:51:27.445715 2368 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:51:27.487237 kubelet[2368]: E0130 13:51:27.487204 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:27.867507 systemd[1]: Reloading requested from client PID 2641 ('systemctl') (unit session-7.scope)... Jan 30 13:51:27.867523 systemd[1]: Reloading... 
Jan 30 13:51:27.923367 kubelet[2368]: E0130 13:51:27.922979 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:27.942285 zram_generator::config[2686]: No configuration found.
Jan 30 13:51:28.060770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:51:28.140198 systemd[1]: Reloading finished in 272 ms.
Jan 30 13:51:28.173372 kubelet[2368]: I0130 13:51:28.173331 2368 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:51:28.173354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:51:28.196482 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:51:28.196892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:51:28.214347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:51:28.353666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:51:28.359176 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:51:28.402255 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:51:28.402255 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:51:28.402255 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:51:28.402639 kubelet[2735]: I0130 13:51:28.402238 2735 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:51:28.406550 kubelet[2735]: I0130 13:51:28.406510 2735 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:51:28.406550 kubelet[2735]: I0130 13:51:28.406537 2735 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:51:28.408980 kubelet[2735]: I0130 13:51:28.408955 2735 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:51:28.410201 kubelet[2735]: I0130 13:51:28.410180 2735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:51:28.411196 kubelet[2735]: I0130 13:51:28.411148 2735 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:51:28.418174 kubelet[2735]: I0130 13:51:28.418140 2735 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:51:28.418671 kubelet[2735]: I0130 13:51:28.418637 2735 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:51:28.418810 kubelet[2735]: I0130 13:51:28.418666 2735 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:51:28.418883 kubelet[2735]: I0130 13:51:28.418823 2735 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:51:28.418883 kubelet[2735]: I0130 13:51:28.418834 2735 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:51:28.418883 kubelet[2735]: I0130 13:51:28.418878 2735 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:51:28.418992 kubelet[2735]: I0130 13:51:28.418971 2735 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:51:28.418992 kubelet[2735]: I0130 13:51:28.418986 2735 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:51:28.419032 kubelet[2735]: I0130 13:51:28.419007 2735 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:51:28.419032 kubelet[2735]: I0130 13:51:28.419028 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:51:28.419788 kubelet[2735]: I0130 13:51:28.419759 2735 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:51:28.419969 kubelet[2735]: I0130 13:51:28.419949 2735 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:51:28.421476 kubelet[2735]: I0130 13:51:28.420338 2735 server.go:1264] "Started kubelet"
Jan 30 13:51:28.421476 kubelet[2735]: I0130 13:51:28.420790 2735 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:51:28.421623 kubelet[2735]: I0130 13:51:28.421551 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
kubelet server" Jan 30 13:51:28.421882 kubelet[2735]: I0130 13:51:28.421435 2735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:51:28.423238 kubelet[2735]: I0130 13:51:28.422459 2735 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:28.427237 kubelet[2735]: I0130 13:51:28.427202 2735 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:51:28.431662 kubelet[2735]: I0130 13:51:28.430867 2735 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:51:28.431662 kubelet[2735]: I0130 13:51:28.431031 2735 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:28.435117 kubelet[2735]: E0130 13:51:28.435091 2735 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:51:28.435885 kubelet[2735]: I0130 13:51:28.435836 2735 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:28.435885 kubelet[2735]: I0130 13:51:28.435854 2735 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:28.435962 kubelet[2735]: I0130 13:51:28.435914 2735 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:28.438096 kubelet[2735]: I0130 13:51:28.438057 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:28.439332 kubelet[2735]: I0130 13:51:28.439306 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:51:28.439396 kubelet[2735]: I0130 13:51:28.439336 2735 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:28.439396 kubelet[2735]: I0130 13:51:28.439355 2735 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:51:28.439448 kubelet[2735]: E0130 13:51:28.439393 2735 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:28.483340 kubelet[2735]: I0130 13:51:28.483305 2735 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:28.483340 kubelet[2735]: I0130 13:51:28.483327 2735 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:28.483340 kubelet[2735]: I0130 13:51:28.483347 2735 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:28.483507 kubelet[2735]: I0130 13:51:28.483490 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:51:28.483606 kubelet[2735]: I0130 13:51:28.483504 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:51:28.483606 kubelet[2735]: I0130 13:51:28.483524 2735 policy_none.go:49] "None policy: Start" Jan 30 13:51:28.484217 kubelet[2735]: I0130 13:51:28.484186 2735 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:28.484217 kubelet[2735]: I0130 13:51:28.484214 2735 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:28.484424 kubelet[2735]: I0130 13:51:28.484397 2735 state_mem.go:75] "Updated machine memory state" Jan 30 13:51:28.486389 kubelet[2735]: I0130 13:51:28.485865 2735 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:28.486389 
kubelet[2735]: I0130 13:51:28.486033 2735 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:28.486389 kubelet[2735]: I0130 13:51:28.486115 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:28.532103 kubelet[2735]: I0130 13:51:28.532072 2735 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:51:28.538254 kubelet[2735]: I0130 13:51:28.538223 2735 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 13:51:28.538359 kubelet[2735]: I0130 13:51:28.538286 2735 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:51:28.539660 kubelet[2735]: I0130 13:51:28.539610 2735 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:51:28.539748 kubelet[2735]: I0130 13:51:28.539714 2735 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:51:28.539800 kubelet[2735]: I0130 13:51:28.539760 2735 topology_manager.go:215] "Topology Admit Handler" podUID="a98996ab16dbf93db224f88ab13b9454" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:51:28.545174 kubelet[2735]: E0130 13:51:28.545065 2735 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:28.545485 kubelet[2735]: E0130 13:51:28.545454 2735 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:28.733022 kubelet[2735]: I0130 13:51:28.732928 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a98996ab16dbf93db224f88ab13b9454-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a98996ab16dbf93db224f88ab13b9454\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:28.733022 kubelet[2735]: I0130 13:51:28.732969 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a98996ab16dbf93db224f88ab13b9454-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a98996ab16dbf93db224f88ab13b9454\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:28.733022 kubelet[2735]: I0130 13:51:28.732990 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:28.733022 kubelet[2735]: I0130 13:51:28.733009 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:28.733022 kubelet[2735]: I0130 13:51:28.733024 2735 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:51:28.733185 kubelet[2735]: I0130 13:51:28.733040 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a98996ab16dbf93db224f88ab13b9454-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a98996ab16dbf93db224f88ab13b9454\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:28.733185 kubelet[2735]: I0130 13:51:28.733057 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:28.733185 kubelet[2735]: I0130 13:51:28.733072 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:28.733185 kubelet[2735]: I0130 13:51:28.733088 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:28.846915 kubelet[2735]: E0130 13:51:28.846780 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:28.847075 kubelet[2735]: E0130 13:51:28.847059 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:28.847214 kubelet[2735]: E0130 13:51:28.847074 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:29.420295 kubelet[2735]: I0130 13:51:29.420254 2735 apiserver.go:52] "Watching apiserver" Jan 30 13:51:29.925130 kubelet[2735]: E0130 13:51:29.924954 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:29.933334 kubelet[2735]: E0130 13:51:29.931459 2735 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:51:29.933334 kubelet[2735]: E0130 13:51:29.931865 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:29.933334 kubelet[2735]: E0130 13:51:29.932199 2735 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:51:29.933334 kubelet[2735]: I0130 13:51:29.932330 2735 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:51:29.933334 kubelet[2735]: E0130 13:51:29.932501 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:29.942734 kubelet[2735]: I0130 13:51:29.942561 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.942544308 podStartE2EDuration="2.942544308s" podCreationTimestamp="2025-01-30 13:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:29.942401347 +0000 UTC m=+1.579026377" watchObservedRunningTime="2025-01-30 13:51:29.942544308 +0000 UTC m=+1.579169338" Jan 30 13:51:29.956632 kubelet[2735]: I0130 13:51:29.956573 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.956556952 podStartE2EDuration="2.956556952s" podCreationTimestamp="2025-01-30 13:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:29.956538231 +0000 UTC m=+1.593163262" watchObservedRunningTime="2025-01-30 13:51:29.956556952 +0000 UTC m=+1.593181982" Jan 30 13:51:29.956803 kubelet[2735]: I0130 13:51:29.956640 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.956636759 podStartE2EDuration="1.956636759s" podCreationTimestamp="2025-01-30 13:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:29.949791316 +0000 UTC m=+1.586416346" watchObservedRunningTime="2025-01-30 13:51:29.956636759 +0000 UTC m=+1.593261789" Jan 30 13:51:29.986260 systemd-resolved[1463]: Under memory pressure, flushing caches. Jan 30 13:51:29.986300 systemd-resolved[1463]: Flushed all caches. Jan 30 13:51:29.988186 systemd-journald[1159]: Under memory pressure, flushing caches. Jan 30 13:51:30.927628 kubelet[2735]: E0130 13:51:30.927592 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:30.928069 kubelet[2735]: E0130 13:51:30.927883 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:32.724146 kubelet[2735]: E0130 13:51:32.724106 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:33.333269 sudo[1761]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:33.335236 sshd[1755]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:33.339342 systemd[1]: sshd@6-10.0.0.158:22-10.0.0.1:38210.service: Deactivated successfully. Jan 30 13:51:33.341431 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:51:33.341991 systemd-logind[1529]: Session 7 logged out. 
Jan 30 13:51:33.341991 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:51:33.342894 systemd-logind[1529]: Removed session 7.
Jan 30 13:51:34.053227 kubelet[2735]: E0130 13:51:34.053197 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:34.932379 kubelet[2735]: E0130 13:51:34.932337 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:39.872862 kubelet[2735]: E0130 13:51:39.872827 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:42.727787 kubelet[2735]: E0130 13:51:42.727753 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:43.025425 kubelet[2735]: I0130 13:51:43.025380 2735 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:51:43.025787 containerd[1551]: time="2025-01-30T13:51:43.025738963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:51:43.026203 kubelet[2735]: I0130 13:51:43.025874 2735 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:51:43.809301 kubelet[2735]: I0130 13:51:43.809130 2735 topology_manager.go:215] "Topology Admit Handler" podUID="1e1f084b-0ec9-4a78-bd71-feb85592c546" podNamespace="kube-system" podName="kube-proxy-bsc26"
Jan 30 13:51:43.825293 kubelet[2735]: I0130 13:51:43.825261 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e1f084b-0ec9-4a78-bd71-feb85592c546-xtables-lock\") pod \"kube-proxy-bsc26\" (UID: \"1e1f084b-0ec9-4a78-bd71-feb85592c546\") " pod="kube-system/kube-proxy-bsc26"
Jan 30 13:51:43.825293 kubelet[2735]: I0130 13:51:43.825294 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e1f084b-0ec9-4a78-bd71-feb85592c546-kube-proxy\") pod \"kube-proxy-bsc26\" (UID: \"1e1f084b-0ec9-4a78-bd71-feb85592c546\") " pod="kube-system/kube-proxy-bsc26"
Jan 30 13:51:43.825460 kubelet[2735]: I0130 13:51:43.825311 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e1f084b-0ec9-4a78-bd71-feb85592c546-lib-modules\") pod \"kube-proxy-bsc26\" (UID: \"1e1f084b-0ec9-4a78-bd71-feb85592c546\") " pod="kube-system/kube-proxy-bsc26"
Jan 30 13:51:43.825460 kubelet[2735]: I0130 13:51:43.825325 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbmj\" (UniqueName: \"kubernetes.io/projected/1e1f084b-0ec9-4a78-bd71-feb85592c546-kube-api-access-9sbmj\") pod \"kube-proxy-bsc26\" (UID: \"1e1f084b-0ec9-4a78-bd71-feb85592c546\") " pod="kube-system/kube-proxy-bsc26"
Jan 30 13:51:43.930039 kubelet[2735]: E0130 13:51:43.930012 2735 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 30 13:51:43.930039 kubelet[2735]: E0130 13:51:43.930036 2735 projected.go:200] Error preparing data for projected volume kube-api-access-9sbmj for pod kube-system/kube-proxy-bsc26: configmap "kube-root-ca.crt" not found
Jan 30 13:51:43.930217 kubelet[2735]: E0130 13:51:43.930095 2735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e1f084b-0ec9-4a78-bd71-feb85592c546-kube-api-access-9sbmj podName:1e1f084b-0ec9-4a78-bd71-feb85592c546 nodeName:}" failed. No retries permitted until 2025-01-30 13:51:44.430075554 +0000 UTC m=+16.066700584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9sbmj" (UniqueName: "kubernetes.io/projected/1e1f084b-0ec9-4a78-bd71-feb85592c546-kube-api-access-9sbmj") pod "kube-proxy-bsc26" (UID: "1e1f084b-0ec9-4a78-bd71-feb85592c546") : configmap "kube-root-ca.crt" not found
Jan 30 13:51:43.938256 update_engine[1534]: I20250130 13:51:43.938180 1534 update_attempter.cc:509] Updating boot flags...
Jan 30 13:51:43.965197 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2831)
Jan 30 13:51:44.001436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2834)
Jan 30 13:51:44.032181 kubelet[2735]: I0130 13:51:44.029442 2735 topology_manager.go:215] "Topology Admit Handler" podUID="21b48dcb-78ca-4545-b2cd-530973a1e4ac" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8qbk4"
Jan 30 13:51:44.051341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2834)
Jan 30 13:51:44.129098 kubelet[2735]: I0130 13:51:44.128972 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gtm\" (UniqueName: \"kubernetes.io/projected/21b48dcb-78ca-4545-b2cd-530973a1e4ac-kube-api-access-89gtm\") pod \"tigera-operator-7bc55997bb-8qbk4\" (UID: \"21b48dcb-78ca-4545-b2cd-530973a1e4ac\") " pod="tigera-operator/tigera-operator-7bc55997bb-8qbk4"
Jan 30 13:51:44.129098 kubelet[2735]: I0130 13:51:44.129018 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21b48dcb-78ca-4545-b2cd-530973a1e4ac-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8qbk4\" (UID: \"21b48dcb-78ca-4545-b2cd-530973a1e4ac\") " pod="tigera-operator/tigera-operator-7bc55997bb-8qbk4"
Jan 30 13:51:44.338515 containerd[1551]: time="2025-01-30T13:51:44.338464951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8qbk4,Uid:21b48dcb-78ca-4545-b2cd-530973a1e4ac,Namespace:tigera-operator,Attempt:0,}"
Jan 30 13:51:44.382756 containerd[1551]: time="2025-01-30T13:51:44.382553472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:51:44.382756 containerd[1551]: time="2025-01-30T13:51:44.382634224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:51:44.382756 containerd[1551]: time="2025-01-30T13:51:44.382649864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:44.435913 containerd[1551]: time="2025-01-30T13:51:44.435863990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8qbk4,Uid:21b48dcb-78ca-4545-b2cd-530973a1e4ac,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c7438bc8f2685383a8e8243cd2223af8c75f9974a948a377e4cbede3d741c193\""
Jan 30 13:51:44.438448 containerd[1551]: time="2025-01-30T13:51:44.438415090Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 13:51:44.713719 kubelet[2735]: E0130 13:51:44.713605 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:44.714173 containerd[1551]: time="2025-01-30T13:51:44.714117688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bsc26,Uid:1e1f084b-0ec9-4a78-bd71-feb85592c546,Namespace:kube-system,Attempt:0,}"
Jan 30 13:51:44.736764 containerd[1551]: time="2025-01-30T13:51:44.736658889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:51:44.736764 containerd[1551]: time="2025-01-30T13:51:44.736719580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:51:44.736764 containerd[1551]: time="2025-01-30T13:51:44.736739018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:44.736973 containerd[1551]: time="2025-01-30T13:51:44.736847635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:44.778317 containerd[1551]: time="2025-01-30T13:51:44.778274556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bsc26,Uid:1e1f084b-0ec9-4a78-bd71-feb85592c546,Namespace:kube-system,Attempt:0,} returns sandbox id \"f041a8ed4fa5aca488d4874113734dc821f7e4bfd785a88a7617916beea3c8cc\""
Jan 30 13:51:44.778801 kubelet[2735]: E0130 13:51:44.778765 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:44.781009 containerd[1551]: time="2025-01-30T13:51:44.780961697Z" level=info msg="CreateContainer within sandbox \"f041a8ed4fa5aca488d4874113734dc821f7e4bfd785a88a7617916beea3c8cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:51:44.795754 containerd[1551]: time="2025-01-30T13:51:44.795709187Z" level=info msg="CreateContainer within sandbox \"f041a8ed4fa5aca488d4874113734dc821f7e4bfd785a88a7617916beea3c8cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9292e9073d707d07febeb8362089adf187457d3df205c5c90553712ce22ebd6\""
Jan 30 13:51:44.796440 containerd[1551]: time="2025-01-30T13:51:44.796203642Z" level=info msg="StartContainer for \"f9292e9073d707d07febeb8362089adf187457d3df205c5c90553712ce22ebd6\""
Jan 30 13:51:44.854720 containerd[1551]: time="2025-01-30T13:51:44.854674506Z" level=info msg="StartContainer for \"f9292e9073d707d07febeb8362089adf187457d3df205c5c90553712ce22ebd6\" returns successfully"
Jan 30 13:51:44.952284 kubelet[2735]: E0130 13:51:44.952129 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:44.960036 kubelet[2735]: I0130 13:51:44.959838 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bsc26" podStartSLOduration=1.95981945 podStartE2EDuration="1.95981945s" podCreationTimestamp="2025-01-30 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:44.959236329 +0000 UTC m=+16.595861379" watchObservedRunningTime="2025-01-30 13:51:44.95981945 +0000 UTC m=+16.596444480"
Jan 30 13:51:47.125771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364192717.mount: Deactivated successfully.
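
The recurring dns.go:153 errors are kubelet's DNS configurer noticing that the node's resolv.conf lists more nameservers than it will pass through: it caps the list at three (the classic glibc resolver limit) and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A rough, self-contained sketch of that check (kubelet's real logic lives in its dns package, not here):

```go
// Sketch of the limit behind "Nameserver limits exceeded": count the
// nameserver entries in resolv.conf and truncate to the first three,
// which is what the warning's "applied nameserver line" reflects.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the log: extra servers are omitted, first three applied.
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
	}
}
```
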
Jan 30 13:51:47.411301 containerd[1551]: time="2025-01-30T13:51:47.411190745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:47.412003 containerd[1551]: time="2025-01-30T13:51:47.411947379Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 30 13:51:47.413051 containerd[1551]: time="2025-01-30T13:51:47.413008616Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:47.415194 containerd[1551]: time="2025-01-30T13:51:47.415147711Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:47.415825 containerd[1551]: time="2025-01-30T13:51:47.415800460Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.977352284s"
Jan 30 13:51:47.415860 containerd[1551]: time="2025-01-30T13:51:47.415828165Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 30 13:51:47.419688 containerd[1551]: time="2025-01-30T13:51:47.419657088Z" level=info msg="CreateContainer within sandbox \"c7438bc8f2685383a8e8243cd2223af8c75f9974a948a377e4cbede3d741c193\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 13:51:47.431738 containerd[1551]: time="2025-01-30T13:51:47.431696010Z" level=info msg="CreateContainer within sandbox \"c7438bc8f2685383a8e8243cd2223af8c75f9974a948a377e4cbede3d741c193\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"201bcd370f07e1e8dc9d26a5fac1c94303e9efc9379d3b3dc5524820a336d074\""
Jan 30 13:51:47.432083 containerd[1551]: time="2025-01-30T13:51:47.432056762Z" level=info msg="StartContainer for \"201bcd370f07e1e8dc9d26a5fac1c94303e9efc9379d3b3dc5524820a336d074\""
Jan 30 13:51:47.480377 containerd[1551]: time="2025-01-30T13:51:47.480339619Z" level=info msg="StartContainer for \"201bcd370f07e1e8dc9d26a5fac1c94303e9efc9379d3b3dc5524820a336d074\" returns successfully"
Jan 30 13:51:50.266109 kubelet[2735]: I0130 13:51:50.266038 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8qbk4" podStartSLOduration=3.284745575 podStartE2EDuration="6.266020186s" podCreationTimestamp="2025-01-30 13:51:44 +0000 UTC" firstStartedPulling="2025-01-30 13:51:44.437324758 +0000 UTC m=+16.073949788" lastFinishedPulling="2025-01-30 13:51:47.418599369 +0000 UTC m=+19.055224399" observedRunningTime="2025-01-30 13:51:47.966579436 +0000 UTC m=+19.603204477" watchObservedRunningTime="2025-01-30 13:51:50.266020186 +0000 UTC m=+21.902645216"
Jan 30 13:51:50.267451 kubelet[2735]: I0130 13:51:50.267402 2735 topology_manager.go:215] "Topology Admit Handler" podUID="9b016edd-7188-40fb-b022-a7fa47abad2e" podNamespace="calico-system" podName="calico-typha-7f959666bb-k4mwz"
Jan 30 13:51:50.293747 kubelet[2735]: I0130 13:51:50.293675 2735 topology_manager.go:215] "Topology Admit Handler" podUID="e236219b-3590-44d7-9dd4-fc7a842921ee" podNamespace="calico-system" podName="calico-node-bb7gp"
Jan 30 13:51:50.372181 kubelet[2735]: I0130 13:51:50.372137 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b016edd-7188-40fb-b022-a7fa47abad2e-typha-certs\") pod \"calico-typha-7f959666bb-k4mwz\" (UID: \"9b016edd-7188-40fb-b022-a7fa47abad2e\") " pod="calico-system/calico-typha-7f959666bb-k4mwz"
Jan 30 13:51:50.372181 kubelet[2735]: I0130 13:51:50.372185 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-cni-net-dir\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372327 kubelet[2735]: I0130 13:51:50.372203 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-xtables-lock\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372327 kubelet[2735]: I0130 13:51:50.372215 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-cni-log-dir\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372327 kubelet[2735]: I0130 13:51:50.372272 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bsn\" (UniqueName: \"kubernetes.io/projected/e236219b-3590-44d7-9dd4-fc7a842921ee-kube-api-access-p7bsn\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372327 kubelet[2735]: I0130 13:51:50.372291 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns8kb\" (UniqueName: \"kubernetes.io/projected/9b016edd-7188-40fb-b022-a7fa47abad2e-kube-api-access-ns8kb\") pod \"calico-typha-7f959666bb-k4mwz\" (UID: \"9b016edd-7188-40fb-b022-a7fa47abad2e\") " pod="calico-system/calico-typha-7f959666bb-k4mwz"
Jan 30 13:51:50.372327 kubelet[2735]: I0130 13:51:50.372311 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-lib-modules\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372443 kubelet[2735]: I0130 13:51:50.372327 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b016edd-7188-40fb-b022-a7fa47abad2e-tigera-ca-bundle\") pod \"calico-typha-7f959666bb-k4mwz\" (UID: \"9b016edd-7188-40fb-b022-a7fa47abad2e\") " pod="calico-system/calico-typha-7f959666bb-k4mwz"
Jan 30 13:51:50.372443 kubelet[2735]: I0130 13:51:50.372343 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-var-run-calico\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372443 kubelet[2735]: I0130 13:51:50.372357 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-flexvol-driver-host\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372443 kubelet[2735]: I0130 13:51:50.372374 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e236219b-3590-44d7-9dd4-fc7a842921ee-node-certs\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372443 kubelet[2735]: I0130 13:51:50.372388 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e236219b-3590-44d7-9dd4-fc7a842921ee-tigera-ca-bundle\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372559 kubelet[2735]: I0130 13:51:50.372406 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-var-lib-calico\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372559 kubelet[2735]: I0130 13:51:50.372432 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-cni-bin-dir\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.372559 kubelet[2735]: I0130 13:51:50.372448 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e236219b-3590-44d7-9dd4-fc7a842921ee-policysync\") pod \"calico-node-bb7gp\" (UID: \"e236219b-3590-44d7-9dd4-fc7a842921ee\") " pod="calico-system/calico-node-bb7gp"
Jan 30 13:51:50.406048 kubelet[2735]: I0130 13:51:50.405694 2735 topology_manager.go:215] "Topology Admit Handler" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b" podNamespace="calico-system" podName="csi-node-driver-zgrtj"
Jan 30 13:51:50.406048 kubelet[2735]: E0130 13:51:50.405936 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b"
Jan 30 13:51:50.474145 kubelet[2735]: I0130 13:51:50.474101 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef0ae419-d122-4e1e-bebf-46a1a780d55b-socket-dir\") pod \"csi-node-driver-zgrtj\" (UID: \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\") " pod="calico-system/csi-node-driver-zgrtj"
Jan 30 13:51:50.474300 kubelet[2735]: I0130 13:51:50.474210 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef0ae419-d122-4e1e-bebf-46a1a780d55b-kubelet-dir\") pod \"csi-node-driver-zgrtj\" (UID: \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\") " pod="calico-system/csi-node-driver-zgrtj"
Jan 30 13:51:50.474300 kubelet[2735]: I0130 13:51:50.474243 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef0ae419-d122-4e1e-bebf-46a1a780d55b-registration-dir\") pod \"csi-node-driver-zgrtj\" (UID: \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\") " pod="calico-system/csi-node-driver-zgrtj"
Jan 30 13:51:50.474350 kubelet[2735]: I0130 13:51:50.474287 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb544\" (UniqueName: \"kubernetes.io/projected/ef0ae419-d122-4e1e-bebf-46a1a780d55b-kube-api-access-fb544\") pod \"csi-node-driver-zgrtj\" (UID: \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\") " pod="calico-system/csi-node-driver-zgrtj"
Jan 30 13:51:50.474376 kubelet[2735]: I0130 13:51:50.474356 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef0ae419-d122-4e1e-bebf-46a1a780d55b-varrun\") pod \"csi-node-driver-zgrtj\" (UID: \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\") " pod="calico-system/csi-node-driver-zgrtj"
Jan 30 13:51:50.485396 kubelet[2735]: E0130 13:51:50.485375 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:51:50.487561 kubelet[2735]: W0130 13:51:50.485469 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:51:50.487561 kubelet[2735]: E0130 13:51:50.485500 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:51:50.575397 kubelet[2735]: E0130 13:51:50.575250 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:50.577271 containerd[1551]: time="2025-01-30T13:51:50.577152509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f959666bb-k4mwz,Uid:9b016edd-7188-40fb-b022-a7fa47abad2e,Namespace:calico-system,Attempt:0,}"
Jan 30 13:51:50.590725 kubelet[2735]: E0130 13:51:50.590696 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:51:50.590725 kubelet[2735]: W0130 13:51:50.590711 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:51:50.590725 kubelet[2735]: E0130 13:51:50.590720 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:51:50.598977 kubelet[2735]: E0130 13:51:50.598882 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:50.599632 containerd[1551]: time="2025-01-30T13:51:50.599423145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bb7gp,Uid:e236219b-3590-44d7-9dd4-fc7a842921ee,Namespace:calico-system,Attempt:0,}"
Jan 30 13:51:50.603876 containerd[1551]: time="2025-01-30T13:51:50.603795224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
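
The driver-call.go/plugins.go error triplets come from kubelet's FlexVolume prober: each probe execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument and expects a JSON status object on stdout, so a missing binary produces both the "executable file not found" exec failure and the empty-output "unexpected end of JSON input" unmarshal error. They presumably stop once Calico's pod2daemon flexvol container (pulled below) installs the uds driver into the flexvol-driver-host host path mounted above. A simplified sketch of the call convention (the DriverStatus shape here is trimmed down, not kubelet's full type):

```go
// Sketch of the FlexVolume driver-call contract: exec the driver with
// a command such as "init" and decode a JSON status from its output.
// A missing or silent driver yields exactly the errors in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for kubelet's DriverStatus.
type driverStatus struct {
	Status  string `json:"status"` // "Success", "Failure", "Not supported"
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		// e.g. "executable file not found in $PATH", output: ""
		return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output decodes to "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %v", out, err)
	}
	return &st, nil
}

func main() {
	st, err := callDriver(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("init: %+v\n", st)
}
```
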
Jan 30 13:51:50.603876 containerd[1551]: time="2025-01-30T13:51:50.603856093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:51:50.603975 containerd[1551]: time="2025-01-30T13:51:50.603877546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:50.604270 containerd[1551]: time="2025-01-30T13:51:50.603961330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:50.622353 containerd[1551]: time="2025-01-30T13:51:50.622240355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:51:50.622353 containerd[1551]: time="2025-01-30T13:51:50.622295843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:51:50.622353 containerd[1551]: time="2025-01-30T13:51:50.622311825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:50.622464 containerd[1551]: time="2025-01-30T13:51:50.622407172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:50.655715 containerd[1551]: time="2025-01-30T13:51:50.655621301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f959666bb-k4mwz,Uid:9b016edd-7188-40fb-b022-a7fa47abad2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073\""
Jan 30 13:51:50.657805 kubelet[2735]: E0130 13:51:50.657613 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:50.660232 containerd[1551]: time="2025-01-30T13:51:50.660200385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bb7gp,Uid:e236219b-3590-44d7-9dd4-fc7a842921ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\""
Jan 30 13:51:50.660685 kubelet[2735]: E0130 13:51:50.660665 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:50.662253 containerd[1551]: time="2025-01-30T13:51:50.661960669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 13:51:51.961187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120274071.mount: Deactivated successfully.
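
Each RunPodSandbox entry is kubelet asking containerd over CRI to create a pod's pause sandbox; the returned sandbox ids above are what the later CreateContainer calls reference. A sketch of that call for the calico-node pod, with the socket path and the otherwise-empty sandbox config assumed (not kubelet's own code, which fills in much more):

```go
// Sketch of the CRI RunPodSandbox call behind the log's
// "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id"
// entries. Metadata values copy the calico-node pod above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(),
		&runtimeapi.RunPodSandboxRequest{
			Config: &runtimeapi.PodSandboxConfig{
				Metadata: &runtimeapi.PodSandboxMetadata{
					Name:      "calico-node-bb7gp",
					Uid:       "e236219b-3590-44d7-9dd4-fc7a842921ee",
					Namespace: "calico-system",
					Attempt:   0,
				},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	// Subsequent CreateContainer calls target this id, matching the
	// "CreateContainer within sandbox ..." entries in the log.
	fmt.Println("sandbox id:", resp.GetPodSandboxId())
}
```
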
Jan 30 13:51:52.308069 containerd[1551]: time="2025-01-30T13:51:52.308015391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:52.308744 containerd[1551]: time="2025-01-30T13:51:52.308698937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 30 13:51:52.309837 containerd[1551]: time="2025-01-30T13:51:52.309800590Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:52.312135 containerd[1551]: time="2025-01-30T13:51:52.312058282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:52.312946 containerd[1551]: time="2025-01-30T13:51:52.312574371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.650583603s"
Jan 30 13:51:52.312946 containerd[1551]: time="2025-01-30T13:51:52.312604330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 13:51:52.314057 containerd[1551]: time="2025-01-30T13:51:52.313651526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 13:51:52.319901 containerd[1551]: time="2025-01-30T13:51:52.319777680Z" level=info msg="CreateContainer within sandbox \"e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 13:51:52.328993 containerd[1551]: time="2025-01-30T13:51:52.328966541Z" level=info msg="CreateContainer within sandbox \"e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\""
Jan 30 13:51:52.329409 containerd[1551]: time="2025-01-30T13:51:52.329380650Z" level=info msg="StartContainer for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\""
Jan 30 13:51:52.394726 containerd[1551]: time="2025-01-30T13:51:52.394680310Z" level=info msg="StartContainer for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" returns successfully"
Jan 30 13:51:52.442596 kubelet[2735]: E0130 13:51:52.442226 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b"
Jan 30 13:51:52.969310 kubelet[2735]: E0130 13:51:52.969285 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:51:52.978633 kubelet[2735]: E0130 13:51:52.978608 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:51:52.978633 kubelet[2735]: W0130 13:51:52.978627 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:51:52.978633 kubelet[2735]: E0130 13:51:52.978644 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
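
The ImageCreate and Pulled entries close out the CRI image pulls: containerd reports the resolved digest and the pull duration, and kubelet then creates containers from the returned reference. A sketch of the same pull through the CRI image service, using the pod2daemon-flexvol image requested above (socket path assumed; auth and sandbox config omitted):

```go
// Sketch of the CRI image pull behind "PullImage ... returns image
// reference": the response carries the resolved image reference.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(),
		&runtimeapi.PullImageRequest{
			Image: &runtimeapi.ImageSpec{
				Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("image ref:", resp.GetImageRef())
}
```
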
Jan 30 13:51:53.001502 kubelet[2735]: E0130 13:51:53.001492 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:51:53.001502 kubelet[2735]: W0130 13:51:53.001500 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:51:53.001544 kubelet[2735]: E0130 13:51:53.001532 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:53.001683 kubelet[2735]: E0130 13:51:53.001670 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:53.001683 kubelet[2735]: W0130 13:51:53.001681 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:53.001736 kubelet[2735]: E0130 13:51:53.001693 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:53.001929 kubelet[2735]: E0130 13:51:53.001911 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:53.001952 kubelet[2735]: W0130 13:51:53.001927 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:53.001952 kubelet[2735]: E0130 13:51:53.001944 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:53.002166 kubelet[2735]: E0130 13:51:53.002139 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:53.002166 kubelet[2735]: W0130 13:51:53.002152 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:53.002222 kubelet[2735]: E0130 13:51:53.002172 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
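The burst above is the kubelet's FlexVolume prober at work: every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is expected to contain an executable named after the driver, which the kubelet runs with the single argument init and whose stdout it parses as JSON. Here the nodeagent~uds directory exists but the uds binary inside it does not, so the call yields empty output and unmarshalling "" fails with "unexpected end of JSON input"; the prober re-fires on activity in the plugin directory, hence the rapid repetition. For reference, a healthy driver would answer init with JSON along these lines (a minimal sketch of the FlexVolume convention; a real driver may advertise more capabilities):

  {
    "status": "Success",
    "capabilities": {
      "attach": false
    }
  }

The noise stops once the uds binary is installed or the stale nodeagent~uds directory is removed.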
Jan 30 13:51:53.634615 containerd[1551]: time="2025-01-30T13:51:53.634574222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.635641 containerd[1551]: time="2025-01-30T13:51:53.635599712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:51:53.636720 containerd[1551]: time="2025-01-30T13:51:53.636674610Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.638792 containerd[1551]: time="2025-01-30T13:51:53.638765961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.639450 containerd[1551]: time="2025-01-30T13:51:53.639351563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.325673966s" Jan 30 13:51:53.639450 containerd[1551]: time="2025-01-30T13:51:53.639387202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:51:53.641137 containerd[1551]: time="2025-01-30T13:51:53.641103261Z" level=info msg="CreateContainer within sandbox \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:51:53.655529 containerd[1551]: time="2025-01-30T13:51:53.655494950Z" level=info msg="CreateContainer within sandbox \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"96dc65209448e9bbdce1d8a1ea9399a0d352f9bb128ebacb28514d4c997ea144\"" Jan 30 13:51:53.655929 containerd[1551]: time="2025-01-30T13:51:53.655900502Z" level=info msg="StartContainer for \"96dc65209448e9bbdce1d8a1ea9399a0d352f9bb128ebacb28514d4c997ea144\"" Jan 30 13:51:53.711772 containerd[1551]: time="2025-01-30T13:51:53.711739521Z" level=info msg="StartContainer for \"96dc65209448e9bbdce1d8a1ea9399a0d352f9bb128ebacb28514d4c997ea144\" returns successfully" Jan 30 13:51:54.015792 kubelet[2735]: I0130 13:51:53.971365 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:51:54.015792 kubelet[2735]: E0130 13:51:53.971674 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:54.015792 kubelet[2735]: E0130 13:51:53.972209 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:54.015792 kubelet[2735]: I0130 13:51:53.996758 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f959666bb-k4mwz"
podStartSLOduration=2.341591686 podStartE2EDuration="3.996740621s" podCreationTimestamp="2025-01-30 13:51:50 +0000 UTC" firstStartedPulling="2025-01-30 13:51:50.658186824 +0000 UTC m=+22.294811854" lastFinishedPulling="2025-01-30 13:51:52.313335749 +0000 UTC m=+23.949960789" observedRunningTime="2025-01-30 13:51:52.977493489 +0000 UTC m=+24.614118519" watchObservedRunningTime="2025-01-30 13:51:53.996740621 +0000 UTC m=+25.633365651" Jan 30 13:51:54.121517 containerd[1551]: time="2025-01-30T13:51:54.121455669Z" level=info msg="shim disconnected" id=96dc65209448e9bbdce1d8a1ea9399a0d352f9bb128ebacb28514d4c997ea144 namespace=k8s.io Jan 30 13:51:54.121517 containerd[1551]: time="2025-01-30T13:51:54.121505436Z" level=warning msg="cleaning up after shim disconnected" id=96dc65209448e9bbdce1d8a1ea9399a0d352f9bb128ebacb28514d4c997ea144 namespace=k8s.io Jan 30 13:51:54.121517 containerd[1551]: time="2025-01-30T13:51:54.121514634Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:54.440102 kubelet[2735]: E0130 13:51:54.439970 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b" Jan 30 13:51:54.651481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96dc65209448e9bbdce1d8a1ea9399a0d352f9bb128ebacb28514d4c997ea144-rootfs.mount: Deactivated successfully. Jan 30 13:51:54.974648 kubelet[2735]: E0130 13:51:54.974616 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:54.975472 containerd[1551]: time="2025-01-30T13:51:54.975432983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:51:56.440626 kubelet[2735]: E0130 13:51:56.440307 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b" Jan 30 13:51:57.967004 containerd[1551]: time="2025-01-30T13:51:57.966932582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.967752 containerd[1551]: time="2025-01-30T13:51:57.967710090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:51:57.968899 containerd[1551]: time="2025-01-30T13:51:57.968863257Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.971278 containerd[1551]: time="2025-01-30T13:51:57.971235628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.971894 containerd[1551]: time="2025-01-30T13:51:57.971854217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.996385975s" Jan 30 13:51:57.971894 containerd[1551]: time="2025-01-30T13:51:57.971883484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:51:57.974113 containerd[1551]: time="2025-01-30T13:51:57.974073962Z" level=info msg="CreateContainer within sandbox \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:51:57.990075 containerd[1551]: time="2025-01-30T13:51:57.990032443Z" level=info msg="CreateContainer within sandbox \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34d12bad9ff253bfdac95e65ec1cee2de18e3dce7a80928811d96b8c1ca5974b\"" Jan 30 13:51:57.995014 containerd[1551]: time="2025-01-30T13:51:57.994972985Z" level=info msg="StartContainer for \"34d12bad9ff253bfdac95e65ec1cee2de18e3dce7a80928811d96b8c1ca5974b\"" Jan 30 13:51:58.053038 containerd[1551]: time="2025-01-30T13:51:58.053001581Z" level=info msg="StartContainer for \"34d12bad9ff253bfdac95e65ec1cee2de18e3dce7a80928811d96b8c1ca5974b\" returns successfully" Jan 30 13:51:59.212297 kubelet[2735]: E0130 13:51:59.212243 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b" Jan 30 13:51:59.229376 kubelet[2735]: E0130 13:51:59.228200 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:59.235458 systemd[1]: Started sshd@7-10.0.0.158:22-10.0.0.1:58702.service - OpenSSH per-connection server daemon (10.0.0.1:58702). Jan 30 13:51:59.290374 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 58702 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:51:59.292079 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:59.297148 systemd-logind[1529]: New session 8 of user core. Jan 30 13:51:59.304480 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:51:59.441152 sshd[3441]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:59.445529 systemd[1]: sshd@7-10.0.0.158:22-10.0.0.1:58702.service: Deactivated successfully. Jan 30 13:51:59.447809 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:51:59.448615 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:51:59.449535 systemd-logind[1529]: Removed session 8. 
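A note on the recurring dns.go:153 "Nameserver limits exceeded" warnings seen above and below: glibc resolvers honor at most three nameserver entries, so the kubelet truncates the resolv.conf it hands to pods to the first three servers and logs the line it actually applied (here 1.1.1.1 1.0.0.1 8.8.8.8). The host's resolver configuration therefore carries at least four entries, roughly like the sketch below; the fourth server is hypothetical, since the log does not record which entries were dropped:

  nameserver 1.1.1.1
  nameserver 1.0.0.1
  nameserver 8.8.8.8
  nameserver 9.9.9.9   # hypothetical fourth entry, omitted by the kubelet

The warning is cosmetic as long as the surviving servers are reachable.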
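Most of the failures from 13:52:01 onward (the RunPodSandbox and StopPodSandbox errors further down) share one root cause: the Calico CNI plugin identifies the node it runs on by reading /var/lib/calico/nodename, a plain-text file that the calico/node container writes through a hostPath mount once it is up. Until calico-node starts, every sandbox add or delete fails with "stat /var/lib/calico/nodename: no such file or directory" and the kubelet simply backs off and retries; the errors clear on their own once calico-node reaches Running. The file's entire content is the bare node name, e.g. (hypothetical, as this capture never prints it):

  my-node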
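The containerd error immediately below is the same "unexpected end of JSON input" symptom in another spot: containerd watches /etc/cni/net.d and reloads the CNI configuration on every write event (here the trigger is the write of calico-kubeconfig), so while the install-cni container started above (container id 34d12bad…) is still laying its files down, a reload can catch /etc/cni/net.d/10-calico.conflist empty or half-written. That makes this a transient bootstrap race rather than a persistent fault. For orientation, the plugin list install-cni eventually writes has roughly this shape (a minimal sketch with illustrative values; the real file carries more Calico-specific settings):

  {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "calico",
        "datastore_type": "kubernetes",
        "ipam": { "type": "calico-ipam" },
        "policy": { "type": "k8s" },
        "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
      },
      {
        "type": "portmap",
        "capabilities": { "portMappings": true }
      }
    ]
  }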
Jan 30 13:51:59.865487 containerd[1551]: time="2025-01-30T13:51:59.865370877Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 30 13:51:59.889002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34d12bad9ff253bfdac95e65ec1cee2de18e3dce7a80928811d96b8c1ca5974b-rootfs.mount: Deactivated successfully. Jan 30 13:51:59.942440 kubelet[2735]: I0130 13:51:59.942396 2735 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:52:00.001253 systemd-resolved[1463]: Under memory pressure, flushing caches. Jan 30 13:52:00.001297 systemd-resolved[1463]: Flushed all caches. Jan 30 13:52:00.003195 systemd-journald[1159]: Under memory pressure, flushing caches. Jan 30 13:52:00.041830 kubelet[2735]: I0130 13:52:00.041788 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:00.042548 kubelet[2735]: E0130 13:52:00.042507 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:00.229253 kubelet[2735]: E0130 13:52:00.229145 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:00.446484 kubelet[2735]: E0130 13:52:00.446444 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:00.463409 containerd[1551]: time="2025-01-30T13:52:00.462775868Z" level=info msg="shim disconnected" id=34d12bad9ff253bfdac95e65ec1cee2de18e3dce7a80928811d96b8c1ca5974b namespace=k8s.io Jan 30 13:52:00.463409 containerd[1551]: time="2025-01-30T13:52:00.462839441Z" level=warning msg="cleaning up after shim disconnected" id=34d12bad9ff253bfdac95e65ec1cee2de18e3dce7a80928811d96b8c1ca5974b namespace=k8s.io Jan 30 13:52:00.463409 containerd[1551]: time="2025-01-30T13:52:00.462852175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:00.473469 kubelet[2735]: I0130 13:52:00.465905 2735 topology_manager.go:215] "Topology Admit Handler" podUID="decac6b5-b980-4127-9316-ce25e5c0883a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-br7dh" Jan 30 13:52:00.477846 kubelet[2735]: I0130 13:52:00.476320 2735 topology_manager.go:215] "Topology Admit Handler" podUID="8e61af85-1ee2-489b-aa60-bd8bc7907e82" podNamespace="kube-system" podName="coredns-7db6d8ff4d-58w8t" Jan 30 13:52:00.477846 kubelet[2735]: I0130 13:52:00.476460 2735 topology_manager.go:215] "Topology Admit Handler" podUID="d6de7ab5-0ba7-46df-aa33-d3bbedd226fa" podNamespace="calico-apiserver" podName="calico-apiserver-58896df755-7x46h" Jan 30 13:52:00.477846 kubelet[2735]: I0130 13:52:00.476575 2735 topology_manager.go:215] "Topology Admit Handler" podUID="a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" podNamespace="calico-system" podName="calico-kube-controllers-5c7c58cdc5-s4npw" Jan 30 13:52:00.477846 kubelet[2735]: I0130 13:52:00.476704 2735 topology_manager.go:215] "Topology Admit Handler" podUID="78380a0f-05ec-4046-8a96-ad8ade5588e4" podNamespace="calico-apiserver" 
podName="calico-apiserver-58896df755-xxq4k" Jan 30 13:52:00.514024 kubelet[2735]: I0130 13:52:00.513975 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp8d\" (UniqueName: \"kubernetes.io/projected/8e61af85-1ee2-489b-aa60-bd8bc7907e82-kube-api-access-7bp8d\") pod \"coredns-7db6d8ff4d-58w8t\" (UID: \"8e61af85-1ee2-489b-aa60-bd8bc7907e82\") " pod="kube-system/coredns-7db6d8ff4d-58w8t" Jan 30 13:52:00.514024 kubelet[2735]: I0130 13:52:00.514022 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-tigera-ca-bundle\") pod \"calico-kube-controllers-5c7c58cdc5-s4npw\" (UID: \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\") " pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" Jan 30 13:52:00.514225 kubelet[2735]: I0130 13:52:00.514048 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e61af85-1ee2-489b-aa60-bd8bc7907e82-config-volume\") pod \"coredns-7db6d8ff4d-58w8t\" (UID: \"8e61af85-1ee2-489b-aa60-bd8bc7907e82\") " pod="kube-system/coredns-7db6d8ff4d-58w8t" Jan 30 13:52:00.514225 kubelet[2735]: I0130 13:52:00.514082 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78380a0f-05ec-4046-8a96-ad8ade5588e4-calico-apiserver-certs\") pod \"calico-apiserver-58896df755-xxq4k\" (UID: \"78380a0f-05ec-4046-8a96-ad8ade5588e4\") " pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" Jan 30 13:52:00.514225 kubelet[2735]: I0130 13:52:00.514179 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/decac6b5-b980-4127-9316-ce25e5c0883a-config-volume\") pod \"coredns-7db6d8ff4d-br7dh\" (UID: \"decac6b5-b980-4127-9316-ce25e5c0883a\") " pod="kube-system/coredns-7db6d8ff4d-br7dh" Jan 30 13:52:00.514225 kubelet[2735]: I0130 13:52:00.514209 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjmb\" (UniqueName: \"kubernetes.io/projected/78380a0f-05ec-4046-8a96-ad8ade5588e4-kube-api-access-2mjmb\") pod \"calico-apiserver-58896df755-xxq4k\" (UID: \"78380a0f-05ec-4046-8a96-ad8ade5588e4\") " pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" Jan 30 13:52:00.514325 kubelet[2735]: I0130 13:52:00.514232 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbd57\" (UniqueName: \"kubernetes.io/projected/d6de7ab5-0ba7-46df-aa33-d3bbedd226fa-kube-api-access-dbd57\") pod \"calico-apiserver-58896df755-7x46h\" (UID: \"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa\") " pod="calico-apiserver/calico-apiserver-58896df755-7x46h" Jan 30 13:52:00.514325 kubelet[2735]: I0130 13:52:00.514254 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8295\" (UniqueName: \"kubernetes.io/projected/decac6b5-b980-4127-9316-ce25e5c0883a-kube-api-access-z8295\") pod \"coredns-7db6d8ff4d-br7dh\" (UID: \"decac6b5-b980-4127-9316-ce25e5c0883a\") " pod="kube-system/coredns-7db6d8ff4d-br7dh" Jan 30 13:52:00.514325 kubelet[2735]: I0130 13:52:00.514289 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6de7ab5-0ba7-46df-aa33-d3bbedd226fa-calico-apiserver-certs\") pod \"calico-apiserver-58896df755-7x46h\" (UID: \"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa\") " pod="calico-apiserver/calico-apiserver-58896df755-7x46h" Jan 30 13:52:00.514325 kubelet[2735]: I0130 13:52:00.514307 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhj82\" (UniqueName: \"kubernetes.io/projected/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-kube-api-access-jhj82\") pod \"calico-kube-controllers-5c7c58cdc5-s4npw\" (UID: \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\") " pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" Jan 30 13:52:00.786299 containerd[1551]: time="2025-01-30T13:52:00.786190081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-xxq4k,Uid:78380a0f-05ec-4046-8a96-ad8ade5588e4,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:52:00.790428 kubelet[2735]: E0130 13:52:00.790390 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:00.790910 containerd[1551]: time="2025-01-30T13:52:00.790651482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7c58cdc5-s4npw,Uid:a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:00.790910 containerd[1551]: time="2025-01-30T13:52:00.790781343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-br7dh,Uid:decac6b5-b980-4127-9316-ce25e5c0883a,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:00.794998 kubelet[2735]: E0130 13:52:00.794969 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:00.795347 containerd[1551]: time="2025-01-30T13:52:00.795222004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-7x46h,Uid:d6de7ab5-0ba7-46df-aa33-d3bbedd226fa,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:52:00.795347 containerd[1551]: time="2025-01-30T13:52:00.795303001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-58w8t,Uid:8e61af85-1ee2-489b-aa60-bd8bc7907e82,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:01.046782 containerd[1551]: time="2025-01-30T13:52:01.046661282Z" level=error msg="Failed to destroy network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.047881 containerd[1551]: time="2025-01-30T13:52:01.047037878Z" level=error msg="Failed to destroy network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.047881 containerd[1551]: time="2025-01-30T13:52:01.047726899Z" level=error msg="encountered an error cleaning up failed sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.047881 containerd[1551]: time="2025-01-30T13:52:01.047770823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-7x46h,Uid:d6de7ab5-0ba7-46df-aa33-d3bbedd226fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.048374 containerd[1551]: time="2025-01-30T13:52:01.048351244Z" level=error msg="encountered an error cleaning up failed sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.048457 containerd[1551]: time="2025-01-30T13:52:01.048437931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-58w8t,Uid:8e61af85-1ee2-489b-aa60-bd8bc7907e82,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.048539 containerd[1551]: time="2025-01-30T13:52:01.048445767Z" level=error msg="Failed to destroy network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.048908 containerd[1551]: time="2025-01-30T13:52:01.048873562Z" level=error msg="encountered an error cleaning up failed sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.048964 containerd[1551]: time="2025-01-30T13:52:01.048904572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-xxq4k,Uid:78380a0f-05ec-4046-8a96-ad8ade5588e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.049585 containerd[1551]: time="2025-01-30T13:52:01.049362166Z" level=error msg="Failed to destroy network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.049890 containerd[1551]: time="2025-01-30T13:52:01.049849978Z" level=error 
msg="encountered an error cleaning up failed sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.049924 containerd[1551]: time="2025-01-30T13:52:01.049902188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-br7dh,Uid:decac6b5-b980-4127-9316-ce25e5c0883a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.050273 containerd[1551]: time="2025-01-30T13:52:01.050246964Z" level=error msg="Failed to destroy network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.050601 containerd[1551]: time="2025-01-30T13:52:01.050562012Z" level=error msg="encountered an error cleaning up failed sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.050635 containerd[1551]: time="2025-01-30T13:52:01.050605927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7c58cdc5-s4npw,Uid:a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.057626 kubelet[2735]: E0130 13:52:01.057572 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.057626 kubelet[2735]: E0130 13:52:01.057591 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.057749 kubelet[2735]: E0130 13:52:01.057631 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.057749 kubelet[2735]: E0130 13:52:01.057651 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" Jan 30 13:52:01.057749 kubelet[2735]: E0130 13:52:01.057659 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-58w8t" Jan 30 13:52:01.057749 kubelet[2735]: E0130 13:52:01.057667 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" Jan 30 13:52:01.057850 kubelet[2735]: E0130 13:52:01.057674 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" Jan 30 13:52:01.057850 kubelet[2735]: E0130 13:52:01.057679 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-58w8t" Jan 30 13:52:01.057850 kubelet[2735]: E0130 13:52:01.057688 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" Jan 30 13:52:01.057922 kubelet[2735]: E0130 13:52:01.057717 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c7c58cdc5-s4npw_calico-system(a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c7c58cdc5-s4npw_calico-system(a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" podUID="a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" Jan 30 13:52:01.057922 kubelet[2735]: E0130 13:52:01.057722 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-58w8t_kube-system(8e61af85-1ee2-489b-aa60-bd8bc7907e82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-58w8t_kube-system(8e61af85-1ee2-489b-aa60-bd8bc7907e82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-58w8t" podUID="8e61af85-1ee2-489b-aa60-bd8bc7907e82" Jan 30 13:52:01.058011 kubelet[2735]: E0130 13:52:01.057726 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58896df755-xxq4k_calico-apiserver(78380a0f-05ec-4046-8a96-ad8ade5588e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58896df755-xxq4k_calico-apiserver(78380a0f-05ec-4046-8a96-ad8ade5588e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" podUID="78380a0f-05ec-4046-8a96-ad8ade5588e4" Jan 30 13:52:01.058011 kubelet[2735]: E0130 13:52:01.057753 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.058011 kubelet[2735]: E0130 13:52:01.057776 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-br7dh" Jan 30 13:52:01.058103 kubelet[2735]: E0130 13:52:01.057792 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-br7dh" Jan 30 13:52:01.058103 kubelet[2735]: E0130 13:52:01.057821 2735 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-br7dh_kube-system(decac6b5-b980-4127-9316-ce25e5c0883a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-br7dh_kube-system(decac6b5-b980-4127-9316-ce25e5c0883a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-br7dh" podUID="decac6b5-b980-4127-9316-ce25e5c0883a" Jan 30 13:52:01.058103 kubelet[2735]: E0130 13:52:01.057591 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.058211 kubelet[2735]: E0130 13:52:01.057854 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58896df755-7x46h" Jan 30 13:52:01.058211 kubelet[2735]: E0130 13:52:01.057871 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58896df755-7x46h" Jan 30 13:52:01.058211 kubelet[2735]: E0130 13:52:01.057896 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58896df755-7x46h_calico-apiserver(d6de7ab5-0ba7-46df-aa33-d3bbedd226fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58896df755-7x46h_calico-apiserver(d6de7ab5-0ba7-46df-aa33-d3bbedd226fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58896df755-7x46h" podUID="d6de7ab5-0ba7-46df-aa33-d3bbedd226fa" Jan 30 13:52:01.231044 kubelet[2735]: I0130 13:52:01.230999 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:01.231838 kubelet[2735]: I0130 13:52:01.231812 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Jan 30 13:52:01.233513 kubelet[2735]: I0130 13:52:01.233465 2735 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:01.234321 containerd[1551]: time="2025-01-30T13:52:01.234281752Z" level=info msg="StopPodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\"" Jan 30 13:52:01.234543 containerd[1551]: time="2025-01-30T13:52:01.234388748Z" level=info msg="StopPodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\"" Jan 30 13:52:01.235192 containerd[1551]: time="2025-01-30T13:52:01.235021680Z" level=info msg="StopPodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\"" Jan 30 13:52:01.235767 kubelet[2735]: E0130 13:52:01.235710 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:01.237527 containerd[1551]: time="2025-01-30T13:52:01.237509733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:52:01.238060 kubelet[2735]: I0130 13:52:01.238039 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Jan 30 13:52:01.238451 containerd[1551]: time="2025-01-30T13:52:01.238427194Z" level=info msg="StopPodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\"" Jan 30 13:52:01.240479 containerd[1551]: time="2025-01-30T13:52:01.240414110Z" level=info msg="Ensure that sandbox 5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde in task-service has been cleanup successfully" Jan 30 13:52:01.240479 containerd[1551]: time="2025-01-30T13:52:01.240428929Z" level=info msg="Ensure that sandbox 69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e in task-service has been cleanup successfully" Jan 30 13:52:01.240549 containerd[1551]: time="2025-01-30T13:52:01.240412768Z" level=info msg="Ensure that sandbox f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f in task-service has been cleanup successfully" Jan 30 13:52:01.241021 containerd[1551]: time="2025-01-30T13:52:01.240417226Z" level=info msg="Ensure that sandbox e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a in task-service has been cleanup successfully" Jan 30 13:52:01.241706 kubelet[2735]: I0130 13:52:01.241687 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:01.242204 containerd[1551]: time="2025-01-30T13:52:01.242151043Z" level=info msg="StopPodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\"" Jan 30 13:52:01.242780 containerd[1551]: time="2025-01-30T13:52:01.242622162Z" level=info msg="Ensure that sandbox ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69 in task-service has been cleanup successfully" Jan 30 13:52:01.305133 containerd[1551]: time="2025-01-30T13:52:01.304079282Z" level=error msg="StopPodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" failed" error="failed to destroy network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.306597 containerd[1551]: 
time="2025-01-30T13:52:01.306530985Z" level=error msg="StopPodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" failed" error="failed to destroy network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.306761 kubelet[2735]: E0130 13:52:01.306629 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:01.306761 kubelet[2735]: E0130 13:52:01.306687 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69"} Jan 30 13:52:01.306761 kubelet[2735]: E0130 13:52:01.306743 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"decac6b5-b980-4127-9316-ce25e5c0883a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:01.306931 kubelet[2735]: E0130 13:52:01.306767 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"decac6b5-b980-4127-9316-ce25e5c0883a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-br7dh" podUID="decac6b5-b980-4127-9316-ce25e5c0883a" Jan 30 13:52:01.307020 kubelet[2735]: E0130 13:52:01.306908 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:01.307046 containerd[1551]: time="2025-01-30T13:52:01.306907472Z" level=error msg="StopPodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" failed" error="failed to destroy network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.307223 kubelet[2735]: E0130 13:52:01.307144 2735 kuberuntime_manager.go:1375] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e"} Jan 30 13:52:01.307283 kubelet[2735]: E0130 13:52:01.307236 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:01.307323 kubelet[2735]: E0130 13:52:01.307281 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a"} Jan 30 13:52:01.307323 kubelet[2735]: E0130 13:52:01.307307 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:01.307423 kubelet[2735]: E0130 13:52:01.307324 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" podUID="a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" Jan 30 13:52:01.307512 kubelet[2735]: E0130 13:52:01.307209 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:01.307580 kubelet[2735]: E0130 13:52:01.307531 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58896df755-7x46h" podUID="d6de7ab5-0ba7-46df-aa33-d3bbedd226fa" Jan 30 13:52:01.324290 containerd[1551]: time="2025-01-30T13:52:01.324241173Z" level=error msg="StopPodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" failed" error="failed to destroy network for 
sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.324522 kubelet[2735]: E0130 13:52:01.324476 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Jan 30 13:52:01.324620 kubelet[2735]: E0130 13:52:01.324523 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"} Jan 30 13:52:01.324620 kubelet[2735]: E0130 13:52:01.324548 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78380a0f-05ec-4046-8a96-ad8ade5588e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:01.324620 kubelet[2735]: E0130 13:52:01.324568 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78380a0f-05ec-4046-8a96-ad8ade5588e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" podUID="78380a0f-05ec-4046-8a96-ad8ade5588e4" Jan 30 13:52:01.334633 containerd[1551]: time="2025-01-30T13:52:01.334579201Z" level=error msg="StopPodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" failed" error="failed to destroy network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.334836 kubelet[2735]: E0130 13:52:01.334807 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Jan 30 13:52:01.334892 kubelet[2735]: E0130 13:52:01.334837 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"} Jan 30 13:52:01.334892 kubelet[2735]: E0130 13:52:01.334857 
2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e61af85-1ee2-489b-aa60-bd8bc7907e82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:01.334892 kubelet[2735]: E0130 13:52:01.334876 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e61af85-1ee2-489b-aa60-bd8bc7907e82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-58w8t" podUID="8e61af85-1ee2-489b-aa60-bd8bc7907e82" Jan 30 13:52:01.442413 containerd[1551]: time="2025-01-30T13:52:01.442376857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgrtj,Uid:ef0ae419-d122-4e1e-bebf-46a1a780d55b,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:01.501987 containerd[1551]: time="2025-01-30T13:52:01.501934190Z" level=error msg="Failed to destroy network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.502374 containerd[1551]: time="2025-01-30T13:52:01.502343390Z" level=error msg="encountered an error cleaning up failed sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.502419 containerd[1551]: time="2025-01-30T13:52:01.502393817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgrtj,Uid:ef0ae419-d122-4e1e-bebf-46a1a780d55b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.502658 kubelet[2735]: E0130 13:52:01.502599 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.502658 kubelet[2735]: E0130 13:52:01.502665 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgrtj" Jan 30 13:52:01.502820 kubelet[2735]: E0130 13:52:01.502685 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgrtj" Jan 30 13:52:01.502820 kubelet[2735]: E0130 13:52:01.502728 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgrtj_calico-system(ef0ae419-d122-4e1e-bebf-46a1a780d55b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgrtj_calico-system(ef0ae419-d122-4e1e-bebf-46a1a780d55b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b" Jan 30 13:52:01.919998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde-shm.mount: Deactivated successfully. Jan 30 13:52:01.920261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e-shm.mount: Deactivated successfully. Jan 30 13:52:01.920451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a-shm.mount: Deactivated successfully. Jan 30 13:52:01.920638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69-shm.mount: Deactivated successfully. Jan 30 13:52:01.920816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f-shm.mount: Deactivated successfully. Jan 30 13:52:02.049314 systemd-resolved[1463]: Under memory pressure, flushing caches. Jan 30 13:52:02.049334 systemd-resolved[1463]: Flushed all caches. Jan 30 13:52:02.051185 systemd-journald[1159]: Under memory pressure, flushing caches. 
Jan 30 13:52:02.244076 kubelet[2735]: I0130 13:52:02.244055 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449"
Jan 30 13:52:02.244687 containerd[1551]: time="2025-01-30T13:52:02.244641107Z" level=info msg="StopPodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\""
Jan 30 13:52:02.245148 containerd[1551]: time="2025-01-30T13:52:02.244788832Z" level=info msg="Ensure that sandbox 56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449 in task-service has been cleanup successfully"
Jan 30 13:52:02.271611 containerd[1551]: time="2025-01-30T13:52:02.271562436Z" level=error msg="StopPodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" failed" error="failed to destroy network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:52:02.271896 kubelet[2735]: E0130 13:52:02.271849 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449"
Jan 30 13:52:02.271943 kubelet[2735]: E0130 13:52:02.271908 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449"}
Jan 30 13:52:02.271971 kubelet[2735]: E0130 13:52:02.271952 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 13:52:02.272051 kubelet[2735]: E0130 13:52:02.271977 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef0ae419-d122-4e1e-bebf-46a1a780d55b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgrtj" podUID="ef0ae419-d122-4e1e-bebf-46a1a780d55b"
Jan 30 13:52:04.451028 systemd[1]: Started sshd@8-10.0.0.158:22-10.0.0.1:36094.service - OpenSSH per-connection server daemon (10.0.0.1:36094).
Jan 30 13:52:04.483522 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 36094 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:04.485099 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:04.491740 systemd-logind[1529]: New session 9 of user core.
Jan 30 13:52:04.498506 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:52:04.625447 sshd[3855]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:04.630291 systemd[1]: sshd@8-10.0.0.158:22-10.0.0.1:36094.service: Deactivated successfully.
Jan 30 13:52:04.633099 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:52:04.633811 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:52:04.634920 systemd-logind[1529]: Removed session 9.
Jan 30 13:52:05.930761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471769425.mount: Deactivated successfully.
Jan 30 13:52:07.786373 containerd[1551]: time="2025-01-30T13:52:07.786306985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:07.819702 containerd[1551]: time="2025-01-30T13:52:07.819632922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 30 13:52:07.864314 containerd[1551]: time="2025-01-30T13:52:07.864272207Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:07.867060 containerd[1551]: time="2025-01-30T13:52:07.867022129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:07.867652 containerd[1551]: time="2025-01-30T13:52:07.867610559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.630070989s"
Jan 30 13:52:07.867652 containerd[1551]: time="2025-01-30T13:52:07.867646939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 30 13:52:07.882636 containerd[1551]: time="2025-01-30T13:52:07.882587895Z" level=info msg="CreateContainer within sandbox \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 30 13:52:07.902009 containerd[1551]: time="2025-01-30T13:52:07.901954019Z" level=info msg="CreateContainer within sandbox \"872eb4fa7d64a824ff6b9c5e4f6f9e67813f3fd414d3d54ee6be2570ba774845\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fed163b1d5c5b6ba76c6605986da6d45d95ca7f7c2249dd082b304d2469aa62e\""
Jan 30 13:52:07.902752 containerd[1551]: time="2025-01-30T13:52:07.902727796Z" level=info msg="StartContainer for \"fed163b1d5c5b6ba76c6605986da6d45d95ca7f7c2249dd082b304d2469aa62e\""
Jan 30 13:52:08.025848 containerd[1551]: time="2025-01-30T13:52:08.025783417Z" level=info msg="StartContainer for \"fed163b1d5c5b6ba76c6605986da6d45d95ca7f7c2249dd082b304d2469aa62e\" returns successfully"
Jan 30 13:52:08.060758 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 30 13:52:08.060905 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 30 13:52:08.256785 kubelet[2735]: E0130 13:52:08.256701 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:52:08.273174 kubelet[2735]: I0130 13:52:08.273093 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bb7gp" podStartSLOduration=1.059785608 podStartE2EDuration="18.273077014s" podCreationTimestamp="2025-01-30 13:51:50 +0000 UTC" firstStartedPulling="2025-01-30 13:51:50.661280774 +0000 UTC m=+22.297905804" lastFinishedPulling="2025-01-30 13:52:07.87457218 +0000 UTC m=+39.511197210" observedRunningTime="2025-01-30 13:52:08.272899503 +0000 UTC m=+39.909524553" watchObservedRunningTime="2025-01-30 13:52:08.273077014 +0000 UTC m=+39.909702044"
Jan 30 13:52:09.635522 systemd[1]: Started sshd@9-10.0.0.158:22-10.0.0.1:36100.service - OpenSSH per-connection server daemon (10.0.0.1:36100).
Jan 30 13:52:09.676096 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 36100 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:09.677534 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:09.685598 systemd-logind[1529]: New session 10 of user core.
Jan 30 13:52:09.692497 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:52:09.700187 kernel: bpftool[4073]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 30 13:52:10.426884 systemd-networkd[1246]: vxlan.calico: Link UP
Jan 30 13:52:10.426908 systemd-networkd[1246]: vxlan.calico: Gained carrier
Jan 30 13:52:10.442938 sshd[4040]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:10.447321 systemd[1]: sshd@9-10.0.0.158:22-10.0.0.1:36100.service: Deactivated successfully.
Jan 30 13:52:10.451911 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:52:10.452475 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:52:10.454533 systemd-logind[1529]: Removed session 10.
Jan 30 13:52:12.097314 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL
Jan 30 13:52:12.440602 containerd[1551]: time="2025-01-30T13:52:12.440392366Z" level=info msg="StopPodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\""
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.649 [INFO][4178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.653 [INFO][4178] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" iface="eth0" netns="/var/run/netns/cni-904c2b4b-ab61-def7-f2d7-184520fffe17"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.653 [INFO][4178] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" iface="eth0" netns="/var/run/netns/cni-904c2b4b-ab61-def7-f2d7-184520fffe17"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.654 [INFO][4178] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" iface="eth0" netns="/var/run/netns/cni-904c2b4b-ab61-def7-f2d7-184520fffe17"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.655 [INFO][4178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.655 [INFO][4178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.711 [INFO][4185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.712 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.712 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.718 [WARNING][4185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.718 [INFO][4185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.719 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:12.724791 containerd[1551]: 2025-01-30 13:52:12.722 [INFO][4178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a"
Jan 30 13:52:12.725466 containerd[1551]: time="2025-01-30T13:52:12.724951397Z" level=info msg="TearDown network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" successfully"
Jan 30 13:52:12.725466 containerd[1551]: time="2025-01-30T13:52:12.724984209Z" level=info msg="StopPodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" returns successfully"
Jan 30 13:52:12.726073 containerd[1551]: time="2025-01-30T13:52:12.725835369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7c58cdc5-s4npw,Uid:a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0,Namespace:calico-system,Attempt:1,}"
Jan 30 13:52:12.728491 systemd[1]: run-netns-cni\x2d904c2b4b\x2dab61\x2ddef7\x2df2d7\x2d184520fffe17.mount: Deactivated successfully.
Jan 30 13:52:12.951961 systemd-networkd[1246]: cali42c7947cdb6: Link UP
Jan 30 13:52:12.952470 systemd-networkd[1246]: cali42c7947cdb6: Gained carrier
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.860 [INFO][4193] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0 calico-kube-controllers-5c7c58cdc5- calico-system a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0 890 0 2025-01-30 13:51:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c7c58cdc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c7c58cdc5-s4npw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali42c7947cdb6 [] []}} ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.860 [INFO][4193] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.887 [INFO][4206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" HandleID="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.894 [INFO][4206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" HandleID="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000389cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c7c58cdc5-s4npw", "timestamp":"2025-01-30 13:52:12.88764672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.895 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.895 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.895 [INFO][4206] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.896 [INFO][4206] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.900 [INFO][4206] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.903 [INFO][4206] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.904 [INFO][4206] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.905 [INFO][4206] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.905 [INFO][4206] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.906 [INFO][4206] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.913 [INFO][4206] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.946 [INFO][4206] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.946 [INFO][4206] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" host="localhost"
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.946 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:12.971721 containerd[1551]: 2025-01-30 13:52:12.946 [INFO][4206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" HandleID="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.972477 containerd[1551]: 2025-01-30 13:52:12.949 [INFO][4193] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0", GenerateName:"calico-kube-controllers-5c7c58cdc5-", Namespace:"calico-system", SelfLink:"", UID:"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7c58cdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c7c58cdc5-s4npw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42c7947cdb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:12.972477 containerd[1551]: 2025-01-30 13:52:12.950 [INFO][4193] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.972477 containerd[1551]: 2025-01-30 13:52:12.950 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42c7947cdb6 ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.972477 containerd[1551]: 2025-01-30 13:52:12.952 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:12.972477 containerd[1551]: 2025-01-30 13:52:12.952 [INFO][4193] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0", GenerateName:"calico-kube-controllers-5c7c58cdc5-", Namespace:"calico-system", SelfLink:"", UID:"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7c58cdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94", Pod:"calico-kube-controllers-5c7c58cdc5-s4npw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42c7947cdb6", MAC:"ce:30:6c:40:a1:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:12.972477 containerd[1551]: 2025-01-30 13:52:12.968 [INFO][4193] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Namespace="calico-system" Pod="calico-kube-controllers-5c7c58cdc5-s4npw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:13.012587 containerd[1551]: time="2025-01-30T13:52:13.012353239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:52:13.012587 containerd[1551]: time="2025-01-30T13:52:13.012410308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:52:13.012587 containerd[1551]: time="2025-01-30T13:52:13.012422412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:52:13.012587 containerd[1551]: time="2025-01-30T13:52:13.012526321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:52:13.037661 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:52:13.064237 containerd[1551]: time="2025-01-30T13:52:13.064186566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7c58cdc5-s4npw,Uid:a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94\""
Jan 30 13:52:13.065518 containerd[1551]: time="2025-01-30T13:52:13.065495181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 30 13:52:13.440482 containerd[1551]: time="2025-01-30T13:52:13.440370004Z" level=info msg="StopPodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\""
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.476 [INFO][4286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.476 [INFO][4286] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" iface="eth0" netns="/var/run/netns/cni-2e375af9-6b6d-b688-ab13-ffa6c3864042"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.477 [INFO][4286] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" iface="eth0" netns="/var/run/netns/cni-2e375af9-6b6d-b688-ab13-ffa6c3864042"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.477 [INFO][4286] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" iface="eth0" netns="/var/run/netns/cni-2e375af9-6b6d-b688-ab13-ffa6c3864042"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.477 [INFO][4286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.477 [INFO][4286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.498 [INFO][4293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.498 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.498 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.502 [WARNING][4293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.502 [INFO][4293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.503 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:13.507648 containerd[1551]: 2025-01-30 13:52:13.505 [INFO][4286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:13.508360 containerd[1551]: time="2025-01-30T13:52:13.507779973Z" level=info msg="TearDown network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" successfully"
Jan 30 13:52:13.508360 containerd[1551]: time="2025-01-30T13:52:13.507804470Z" level=info msg="StopPodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" returns successfully"
Jan 30 13:52:13.508498 containerd[1551]: time="2025-01-30T13:52:13.508472038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-xxq4k,Uid:78380a0f-05ec-4046-8a96-ad8ade5588e4,Namespace:calico-apiserver,Attempt:1,}"
Jan 30 13:52:13.607769 systemd-networkd[1246]: cali41bf2ac1572: Link UP
Jan 30 13:52:13.608345 systemd-networkd[1246]: cali41bf2ac1572: Gained carrier
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.549 [INFO][4301] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0 calico-apiserver-58896df755- calico-apiserver 78380a0f-05ec-4046-8a96-ad8ade5588e4 898 0 2025-01-30 13:51:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58896df755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58896df755-xxq4k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali41bf2ac1572 [] []}} ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.549 [INFO][4301] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.575 [INFO][4315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" HandleID="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.581 [INFO][4315] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" HandleID="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acf30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58896df755-xxq4k", "timestamp":"2025-01-30 13:52:13.575001913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.581 [INFO][4315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.581 [INFO][4315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.581 [INFO][4315] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.583 [INFO][4315] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.586 [INFO][4315] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.589 [INFO][4315] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.591 [INFO][4315] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.593 [INFO][4315] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.593 [INFO][4315] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.594 [INFO][4315] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.597 [INFO][4315] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.603 [INFO][4315] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.603 [INFO][4315] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" host="localhost"
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.603 [INFO][4315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:13.621096 containerd[1551]: 2025-01-30 13:52:13.603 [INFO][4315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" HandleID="k8s-pod-network.556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.621854 containerd[1551]: 2025-01-30 13:52:13.606 [INFO][4301] cni-plugin/k8s.go 386: Populated endpoint ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"78380a0f-05ec-4046-8a96-ad8ade5588e4", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58896df755-xxq4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41bf2ac1572", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:13.621854 containerd[1551]: 2025-01-30 13:52:13.606 [INFO][4301] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.621854 containerd[1551]: 2025-01-30 13:52:13.606 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41bf2ac1572 ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.621854 containerd[1551]: 2025-01-30 13:52:13.608 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.621854 containerd[1551]: 2025-01-30 13:52:13.609 [INFO][4301] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"78380a0f-05ec-4046-8a96-ad8ade5588e4", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3", Pod:"calico-apiserver-58896df755-xxq4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41bf2ac1572", MAC:"12:27:a3:0f:fe:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:13.621854 containerd[1551]: 2025-01-30 13:52:13.618 [INFO][4301] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-xxq4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:13.641962 containerd[1551]: time="2025-01-30T13:52:13.641872451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:52:13.641962 containerd[1551]: time="2025-01-30T13:52:13.641922125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:52:13.641962 containerd[1551]: time="2025-01-30T13:52:13.641936873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:52:13.642134 containerd[1551]: time="2025-01-30T13:52:13.642041123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:52:13.667044 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:52:13.694410 containerd[1551]: time="2025-01-30T13:52:13.693579004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-xxq4k,Uid:78380a0f-05ec-4046-8a96-ad8ade5588e4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3\""
Jan 30 13:52:13.729987 systemd[1]: run-containerd-runc-k8s.io-cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94-runc.7AqXbi.mount: Deactivated successfully.
Jan 30 13:52:13.730199 systemd[1]: run-netns-cni\x2d2e375af9\x2d6b6d\x2db688\x2dab13\x2dffa6c3864042.mount: Deactivated successfully.
Jan 30 13:52:14.401318 systemd-networkd[1246]: cali42c7947cdb6: Gained IPv6LL
Jan 30 13:52:14.441313 containerd[1551]: time="2025-01-30T13:52:14.441178197Z" level=info msg="StopPodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\""
Jan 30 13:52:14.441313 containerd[1551]: time="2025-01-30T13:52:14.441216239Z" level=info msg="StopPodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\""
Jan 30 13:52:14.441772 containerd[1551]: time="2025-01-30T13:52:14.441186152Z" level=info msg="StopPodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\""
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.503 [INFO][4424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.505 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" iface="eth0" netns="/var/run/netns/cni-f855b4a4-ef31-b1c0-3c4c-86e829acbd0d"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.506 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" iface="eth0" netns="/var/run/netns/cni-f855b4a4-ef31-b1c0-3c4c-86e829acbd0d"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.507 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" iface="eth0" netns="/var/run/netns/cni-f855b4a4-ef31-b1c0-3c4c-86e829acbd0d"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.507 [INFO][4424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.507 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.553 [INFO][4447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0"
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.554 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.554 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.559 [WARNING][4447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.559 [INFO][4447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.561 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:14.567465 containerd[1551]: 2025-01-30 13:52:14.563 [INFO][4424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:14.571489 systemd[1]: run-netns-cni\x2df855b4a4\x2def31\x2db1c0\x2d3c4c\x2d86e829acbd0d.mount: Deactivated successfully. Jan 30 13:52:14.572957 containerd[1551]: time="2025-01-30T13:52:14.572130147Z" level=info msg="TearDown network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" successfully" Jan 30 13:52:14.572957 containerd[1551]: time="2025-01-30T13:52:14.572188349Z" level=info msg="StopPodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" returns successfully" Jan 30 13:52:14.573892 containerd[1551]: time="2025-01-30T13:52:14.573854236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-7x46h,Uid:d6de7ab5-0ba7-46df-aa33-d3bbedd226fa,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.520 [INFO][4423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.520 [INFO][4423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" iface="eth0" netns="/var/run/netns/cni-357328d9-e4dd-8b66-d3d0-dec9f11e557c" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.520 [INFO][4423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" iface="eth0" netns="/var/run/netns/cni-357328d9-e4dd-8b66-d3d0-dec9f11e557c" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.520 [INFO][4423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" iface="eth0" netns="/var/run/netns/cni-357328d9-e4dd-8b66-d3d0-dec9f11e557c" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.520 [INFO][4423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.520 [INFO][4423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.554 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.554 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.561 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.566 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.566 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.567 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:14.576589 containerd[1551]: 2025-01-30 13:52:14.573 [INFO][4423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:14.577371 containerd[1551]: time="2025-01-30T13:52:14.577250062Z" level=info msg="TearDown network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" successfully" Jan 30 13:52:14.577371 containerd[1551]: time="2025-01-30T13:52:14.577281061Z" level=info msg="StopPodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" returns successfully" Jan 30 13:52:14.577829 kubelet[2735]: E0130 13:52:14.577805 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:14.579313 containerd[1551]: time="2025-01-30T13:52:14.578749410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-br7dh,Uid:decac6b5-b980-4127-9316-ce25e5c0883a,Namespace:kube-system,Attempt:1,}" Jan 30 13:52:14.579666 systemd[1]: run-netns-cni\x2d357328d9\x2de4dd\x2d8b66\x2dd3d0\x2ddec9f11e557c.mount: Deactivated successfully. 
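
The teardown sequence above releases the pod IP first by handle ID, then falls back to the workload ID, and deliberately ignores the "address doesn't exist" case so a repeated CNI DEL stays safe. A rough sketch of that idempotent pattern; datastore, NotFound, and the argument names are hypothetical stand-ins, not the Calico API:

    class NotFound(Exception):
        """Raised by the datastore when a key has no allocation."""

    def release_ip(datastore, handle_id, workload_id):
        # Try the primary handle first, then fall back to the workload ID.
        for key in (handle_id, workload_id):
            try:
                datastore.release(key)
                return True
            except NotFound:
                continue   # "Asked to release address but it doesn't exist. Ignoring"
        return False       # nothing to release; teardown still reports success
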
Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.508 [INFO][4425] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.508 [INFO][4425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" iface="eth0" netns="/var/run/netns/cni-c7b006ce-2e0e-4af4-710e-47a5c0d95b4c" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.508 [INFO][4425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" iface="eth0" netns="/var/run/netns/cni-c7b006ce-2e0e-4af4-710e-47a5c0d95b4c" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.508 [INFO][4425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" iface="eth0" netns="/var/run/netns/cni-c7b006ce-2e0e-4af4-710e-47a5c0d95b4c" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.508 [INFO][4425] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.508 [INFO][4425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.564 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.564 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.567 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.572 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.572 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.575 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:14.582228 containerd[1551]: 2025-01-30 13:52:14.578 [INFO][4425] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:14.582573 containerd[1551]: time="2025-01-30T13:52:14.582396046Z" level=info msg="TearDown network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" successfully" Jan 30 13:52:14.582573 containerd[1551]: time="2025-01-30T13:52:14.582418830Z" level=info msg="StopPodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" returns successfully" Jan 30 13:52:14.583102 containerd[1551]: time="2025-01-30T13:52:14.582928065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgrtj,Uid:ef0ae419-d122-4e1e-bebf-46a1a780d55b,Namespace:calico-system,Attempt:1,}" Jan 30 13:52:14.585480 systemd[1]: run-netns-cni\x2dc7b006ce\x2d2e0e\x2d4af4\x2d710e\x2d47a5c0d95b4c.mount: Deactivated successfully. Jan 30 13:52:14.722375 systemd-networkd[1246]: cali41bf2ac1572: Gained IPv6LL Jan 30 13:52:14.853411 systemd-networkd[1246]: calif82d72ed475: Link UP Jan 30 13:52:14.853602 systemd-networkd[1246]: calif82d72ed475: Gained carrier Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.772 [INFO][4477] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58896df755--7x46h-eth0 calico-apiserver-58896df755- calico-apiserver d6de7ab5-0ba7-46df-aa33-d3bbedd226fa 909 0 2025-01-30 13:51:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58896df755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58896df755-7x46h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif82d72ed475 [] []}} ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.772 [INFO][4477] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.808 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" HandleID="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.815 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" HandleID="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5e80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58896df755-7x46h", "timestamp":"2025-01-30 13:52:14.808186019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.817 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.817 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.817 [INFO][4529] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.819 [INFO][4529] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.824 [INFO][4529] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.830 [INFO][4529] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.832 [INFO][4529] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.834 [INFO][4529] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.834 [INFO][4529] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.836 [INFO][4529] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42 Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.839 [INFO][4529] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.845 [INFO][4529] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.845 [INFO][4529] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" host="localhost" Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.845 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
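
The IPAM lines above load the affine block 192.168.88.128/26 and claim 192.168.88.131. A minimal sketch of first-free allocation within a block, using only the standard library; the allocated set is a hypothetical stand-in for the block's allocation bitmap, seeded on the assumption that earlier pods took .129 and .130:

    import ipaddress

    def assign_next(block, allocated):
        # Walk the block in order and hand out the first address not yet taken;
        # the real allocator consults the block document's bitmap instead.
        for ip in block.hosts():
            if ip not in allocated:
                allocated.add(ip)
                return ip
        raise RuntimeError("block exhausted; a new block affinity would be claimed")

    block = ipaddress.ip_network("192.168.88.128/26")
    allocated = {ipaddress.ip_address("192.168.88.129"),   # assumed earlier claims
                 ipaddress.ip_address("192.168.88.130")}
    print(assign_next(block, allocated))   # -> 192.168.88.131, as claimed above
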
Jan 30 13:52:14.865479 containerd[1551]: 2025-01-30 13:52:14.845 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" HandleID="k8s-pod-network.1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.866058 containerd[1551]: 2025-01-30 13:52:14.850 [INFO][4477] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--7x46h-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58896df755-7x46h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif82d72ed475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.866058 containerd[1551]: 2025-01-30 13:52:14.850 [INFO][4477] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.866058 containerd[1551]: 2025-01-30 13:52:14.850 [INFO][4477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif82d72ed475 ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.866058 containerd[1551]: 2025-01-30 13:52:14.852 [INFO][4477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.866058 containerd[1551]: 2025-01-30 13:52:14.853 [INFO][4477] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--7x46h-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42", Pod:"calico-apiserver-58896df755-7x46h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif82d72ed475", MAC:"ce:ca:4c:60:04:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.866058 containerd[1551]: 2025-01-30 13:52:14.862 [INFO][4477] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42" Namespace="calico-apiserver" Pod="calico-apiserver-58896df755-7x46h" WorkloadEndpoint="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:14.884757 systemd-networkd[1246]: cali7537d3f7852: Link UP Jan 30 13:52:14.886055 systemd-networkd[1246]: cali7537d3f7852: Gained carrier Jan 30 13:52:14.903087 containerd[1551]: time="2025-01-30T13:52:14.902801045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:14.903087 containerd[1551]: time="2025-01-30T13:52:14.902855729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:14.903087 containerd[1551]: time="2025-01-30T13:52:14.902869336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.903087 containerd[1551]: time="2025-01-30T13:52:14.902961542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.774 [INFO][4501] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0 coredns-7db6d8ff4d- kube-system decac6b5-b980-4127-9316-ce25e5c0883a 911 0 2025-01-30 13:51:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-br7dh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7537d3f7852 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.774 [INFO][4501] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.818 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" HandleID="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.829 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" HandleID="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f40b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-br7dh", "timestamp":"2025-01-30 13:52:14.818492095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.830 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.845 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
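
The interleaved timestamps above show goroutine [4524] waiting from 13:52:14.830 to 13:52:14.845 to acquire the host-wide IPAM lock while [4529] finished its claim; that serialization is what keeps concurrent sandbox creations from claiming the same address. A toy illustration of the pattern, with the shared allocator state and pod list purely illustrative:

    import threading

    ipam_lock = threading.Lock()          # stands in for the host-wide IPAM lock
    next_octet = [131]                    # hypothetical shared allocator state

    def claim(pod):
        with ipam_lock:                   # "About to acquire host-wide IPAM lock."
            ip = f"192.168.88.{next_octet[0]}/26"
            next_octet[0] += 1
            print(pod, "claimed", ip)
        # leaving the with-block is "Released host-wide IPAM lock."

    threads = [threading.Thread(target=claim, args=(p,))
               for p in ("calico-apiserver-58896df755-7x46h", "coredns-7db6d8ff4d-br7dh")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
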
Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.845 [INFO][4524] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.847 [INFO][4524] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.850 [INFO][4524] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.857 [INFO][4524] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.860 [INFO][4524] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.864 [INFO][4524] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.864 [INFO][4524] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.866 [INFO][4524] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.870 [INFO][4524] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.876 [INFO][4524] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.876 [INFO][4524] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" host="localhost" Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.876 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:14.907631 containerd[1551]: 2025-01-30 13:52:14.876 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" HandleID="k8s-pod-network.65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.908336 containerd[1551]: 2025-01-30 13:52:14.879 [INFO][4501] cni-plugin/k8s.go 386: Populated endpoint ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"decac6b5-b980-4127-9316-ce25e5c0883a", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-br7dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7537d3f7852", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.908336 containerd[1551]: 2025-01-30 13:52:14.880 [INFO][4501] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.908336 containerd[1551]: 2025-01-30 13:52:14.880 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7537d3f7852 ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.908336 containerd[1551]: 2025-01-30 13:52:14.886 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.908336 containerd[1551]: 2025-01-30 13:52:14.887 
[INFO][4501] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"decac6b5-b980-4127-9316-ce25e5c0883a", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a", Pod:"coredns-7db6d8ff4d-br7dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7537d3f7852", MAC:"e2:39:01:fd:7c:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.908336 containerd[1551]: 2025-01-30 13:52:14.899 [INFO][4501] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-br7dh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:14.929264 systemd-networkd[1246]: cali1e9ad1f216e: Link UP Jan 30 13:52:14.931731 systemd-networkd[1246]: cali1e9ad1f216e: Gained carrier Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.772 [INFO][4483] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zgrtj-eth0 csi-node-driver- calico-system ef0ae419-d122-4e1e-bebf-46a1a780d55b 910 0 2025-01-30 13:51:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zgrtj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1e9ad1f216e [] []}} ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-" 
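
The Go struct dumps above print port numbers in hex and include the generated MAC; a quick decode confirms these are the standard CoreDNS ports and a locally administered unicast address:

    for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)]:
        print(name, port)                 # dns 53, dns-tcp 53, metrics 9153

    first_octet = int("e2", 16)           # from MAC e2:39:01:fd:7c:01 above
    print(bool(first_octet & 0x02))       # True  -> locally administered bit set
    print(bool(first_octet & 0x01))       # False -> unicast, not multicast
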
Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.772 [INFO][4483] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.820 [INFO][4530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" HandleID="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.831 [INFO][4530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" HandleID="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zgrtj", "timestamp":"2025-01-30 13:52:14.820593576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.831 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.876 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
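
Each Calico plugin line above is nested inside a containerd log line and carries its own "<timestamp> [LEVEL][pid] file.go line: message" format. A small parser for that inner format makes long traces like this easier to follow; the regex and field names are assumptions read off the lines themselves:

    import re

    INNER = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
        r"\[(?P<level>[A-Z]+)\]\[(?P<pid>\d+)\] "
        r"(?P<src>\S+) (?P<line>\d+): (?P<msg>.*)")

    sample = ("2025-01-30 13:52:14.831 [INFO][4530] ipam/ipam_plugin.go 353: "
              "About to acquire host-wide IPAM lock.")
    m = INNER.match(sample)
    print(m.group("level"), m.group("src"), m.group("msg"))
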
Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.876 [INFO][4530] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.879 [INFO][4530] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.883 [INFO][4530] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.893 [INFO][4530] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.898 [INFO][4530] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.901 [INFO][4530] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.901 [INFO][4530] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.905 [INFO][4530] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.910 [INFO][4530] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.917 [INFO][4530] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.917 [INFO][4530] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" host="localhost" Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.917 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
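
All of the pods above land in the same affine /26, which holds 64 addresses, so assignments stay sequential (.131, .132, .133 here) until the block fills and a new block affinity would have to be claimed. The arithmetic:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)                   # 64 addresses per block
    print([str(ip) for ip in list(block)[3:6]])
    # ['192.168.88.131', '192.168.88.132', '192.168.88.133'] -- the three claims above
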
Jan 30 13:52:14.948623 containerd[1551]: 2025-01-30 13:52:14.917 [INFO][4530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" HandleID="k8s-pod-network.20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.949360 containerd[1551]: 2025-01-30 13:52:14.922 [INFO][4483] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zgrtj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0ae419-d122-4e1e-bebf-46a1a780d55b", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zgrtj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e9ad1f216e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.949360 containerd[1551]: 2025-01-30 13:52:14.922 [INFO][4483] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.949360 containerd[1551]: 2025-01-30 13:52:14.922 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e9ad1f216e ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.949360 containerd[1551]: 2025-01-30 13:52:14.932 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.949360 containerd[1551]: 2025-01-30 13:52:14.933 [INFO][4483] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zgrtj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0ae419-d122-4e1e-bebf-46a1a780d55b", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d", Pod:"csi-node-driver-zgrtj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e9ad1f216e", MAC:"4a:e8:fd:c0:42:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:14.949360 containerd[1551]: 2025-01-30 13:52:14.943 [INFO][4483] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d" Namespace="calico-system" Pod="csi-node-driver-zgrtj" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:14.952909 containerd[1551]: time="2025-01-30T13:52:14.952619100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:14.952909 containerd[1551]: time="2025-01-30T13:52:14.952696378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:14.952909 containerd[1551]: time="2025-01-30T13:52:14.952710816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.952909 containerd[1551]: time="2025-01-30T13:52:14.952813032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.953141 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:14.984373 containerd[1551]: time="2025-01-30T13:52:14.982430812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:14.984642 containerd[1551]: time="2025-01-30T13:52:14.984541160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:14.985957 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:14.986423 containerd[1551]: time="2025-01-30T13:52:14.986393584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:14.989318 containerd[1551]: time="2025-01-30T13:52:14.988862597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:15.011477 containerd[1551]: time="2025-01-30T13:52:15.011436864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58896df755-7x46h,Uid:d6de7ab5-0ba7-46df-aa33-d3bbedd226fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42\"" Jan 30 13:52:15.018534 containerd[1551]: time="2025-01-30T13:52:15.018391854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-br7dh,Uid:decac6b5-b980-4127-9316-ce25e5c0883a,Namespace:kube-system,Attempt:1,} returns sandbox id \"65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a\"" Jan 30 13:52:15.019206 kubelet[2735]: E0130 13:52:15.019129 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.021720 containerd[1551]: time="2025-01-30T13:52:15.021684150Z" level=info msg="CreateContainer within sandbox \"65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:15.025632 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:15.042512 containerd[1551]: time="2025-01-30T13:52:15.042464988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgrtj,Uid:ef0ae419-d122-4e1e-bebf-46a1a780d55b,Namespace:calico-system,Attempt:1,} returns sandbox id \"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d\"" Jan 30 13:52:15.049193 containerd[1551]: time="2025-01-30T13:52:15.049146725Z" level=info msg="CreateContainer within sandbox \"65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90f2d3eeab078bcc1bbab1d7efa962b530a0ccf90f7a16ff59067d1ed4e7559d\"" Jan 30 13:52:15.049772 containerd[1551]: time="2025-01-30T13:52:15.049531381Z" level=info msg="StartContainer for \"90f2d3eeab078bcc1bbab1d7efa962b530a0ccf90f7a16ff59067d1ed4e7559d\"" Jan 30 13:52:15.083522 kubelet[2735]: I0130 13:52:15.083488 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:15.084199 kubelet[2735]: E0130 13:52:15.084182 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.106208 containerd[1551]: time="2025-01-30T13:52:15.106139970Z" level=info msg="StartContainer for \"90f2d3eeab078bcc1bbab1d7efa962b530a0ccf90f7a16ff59067d1ed4e7559d\" returns successfully" Jan 30 13:52:15.279698 kubelet[2735]: E0130 13:52:15.279402 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.279940 kubelet[2735]: E0130 13:52:15.279856 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.339612 kubelet[2735]: I0130 13:52:15.339545 2735 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-br7dh" podStartSLOduration=31.339525069 podStartE2EDuration="31.339525069s" podCreationTimestamp="2025-01-30 13:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:15.339460496 +0000 UTC m=+46.976085536" watchObservedRunningTime="2025-01-30 13:52:15.339525069 +0000 UTC m=+46.976150099" Jan 30 13:52:15.664486 systemd[1]: Started sshd@10-10.0.0.158:22-10.0.0.1:52094.service - OpenSSH per-connection server daemon (10.0.0.1:52094). Jan 30 13:52:15.988148 sshd[4798]: Accepted publickey for core from 10.0.0.1 port 52094 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:52:15.989873 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:15.993939 systemd-logind[1529]: New session 11 of user core. Jan 30 13:52:16.001413 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:52:16.145674 sshd[4798]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:16.151376 systemd[1]: Started sshd@11-10.0.0.158:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098). Jan 30 13:52:16.151961 systemd[1]: sshd@10-10.0.0.158:22-10.0.0.1:52094.service: Deactivated successfully. Jan 30 13:52:16.154874 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:52:16.155506 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:52:16.156384 systemd-logind[1529]: Removed session 11. Jan 30 13:52:16.179570 sshd[4822]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:52:16.181120 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:16.184884 systemd-logind[1529]: New session 12 of user core. Jan 30 13:52:16.198462 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:52:16.281194 kubelet[2735]: E0130 13:52:16.281069 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:16.440638 containerd[1551]: time="2025-01-30T13:52:16.440558376Z" level=info msg="StopPodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\"" Jan 30 13:52:16.480716 sshd[4822]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:16.487357 systemd[1]: Started sshd@12-10.0.0.158:22-10.0.0.1:52106.service - OpenSSH per-connection server daemon (10.0.0.1:52106). Jan 30 13:52:16.488037 systemd[1]: sshd@11-10.0.0.158:22-10.0.0.1:52098.service: Deactivated successfully. Jan 30 13:52:16.490880 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:52:16.491075 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:52:16.492509 systemd-logind[1529]: Removed session 12. Jan 30 13:52:16.518312 sshd[4860]: Accepted publickey for core from 10.0.0.1 port 52106 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:52:16.519947 sshd[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:16.523848 systemd-logind[1529]: New session 13 of user core. Jan 30 13:52:16.531619 systemd[1]: Started session-13.scope - Session 13 of User core. 
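
The podStartSLOduration reported above is simply observedRunningTime minus podCreationTimestamp; the pull timestamps are the Go zero time because no image pull was needed, which is why the E2E duration is identical. Checking the arithmetic:

    from datetime import datetime, timezone

    created = datetime(2025, 1, 30, 13, 51, 44, tzinfo=timezone.utc)
    running = datetime(2025, 1, 30, 13, 52, 15, 339525, tzinfo=timezone.utc)
    print((running - created).total_seconds())
    # 31.339525 s; the log records 31.339525069 at nanosecond precision
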
Jan 30 13:52:16.673052 containerd[1551]: time="2025-01-30T13:52:16.672990995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:16.699000 sshd[4860]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:16.703291 systemd[1]: sshd@12-10.0.0.158:22-10.0.0.1:52106.service: Deactivated successfully. Jan 30 13:52:16.706315 systemd-networkd[1246]: calif82d72ed475: Gained IPv6LL Jan 30 13:52:16.707600 systemd-networkd[1246]: cali1e9ad1f216e: Gained IPv6LL Jan 30 13:52:16.708430 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:52:16.708985 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:52:16.710631 systemd-logind[1529]: Removed session 13. Jan 30 13:52:16.715130 containerd[1551]: time="2025-01-30T13:52:16.715075528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:52:16.743789 containerd[1551]: time="2025-01-30T13:52:16.743760368Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.693 [INFO][4852] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.693 [INFO][4852] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" iface="eth0" netns="/var/run/netns/cni-caa20f96-12ec-4ed5-98d9-540be1d9ab71" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.694 [INFO][4852] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" iface="eth0" netns="/var/run/netns/cni-caa20f96-12ec-4ed5-98d9-540be1d9ab71" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.694 [INFO][4852] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" iface="eth0" netns="/var/run/netns/cni-caa20f96-12ec-4ed5-98d9-540be1d9ab71" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.694 [INFO][4852] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.694 [INFO][4852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.717 [INFO][4876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.717 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.717 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
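
"Gained IPv6LL" above means the cali* veths finished configuring an IPv6 link-local address. Assuming classic EUI-64 autoconfiguration (kernels may be set to stable-privacy addresses instead), the address derives from the interface MAC like this:

    def mac_to_ipv6_linklocal(mac: str) -> str:
        b = [int(x, 16) for x in mac.split(":")]
        b[0] ^= 0x02                          # flip the universal/local bit
        eui64 = b[:3] + [0xFF, 0xFE] + b[3:]  # insert ff:fe in the middle
        groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
        return "fe80::" + ":".join(groups)    # not fully RFC 5952 compressed

    print(mac_to_ipv6_linklocal("ce:ca:4c:60:04:21"))   # MAC of calif82d72ed475
    # -> fe80::ccca:4cff:fe60:0421
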
Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.750 [WARNING][4876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.750 [INFO][4876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.752 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:16.757877 containerd[1551]: 2025-01-30 13:52:16.754 [INFO][4852] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Jan 30 13:52:16.758282 containerd[1551]: time="2025-01-30T13:52:16.758045474Z" level=info msg="TearDown network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" successfully" Jan 30 13:52:16.758282 containerd[1551]: time="2025-01-30T13:52:16.758069820Z" level=info msg="StopPodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" returns successfully" Jan 30 13:52:16.759031 kubelet[2735]: E0130 13:52:16.759007 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:16.759556 containerd[1551]: time="2025-01-30T13:52:16.759506397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-58w8t,Uid:8e61af85-1ee2-489b-aa60-bd8bc7907e82,Namespace:kube-system,Attempt:1,}" Jan 30 13:52:16.760996 systemd[1]: run-netns-cni\x2dcaa20f96\x2d12ec\x2d4ed5\x2d98d9\x2d540be1d9ab71.mount: Deactivated successfully. 
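
The recurring kubelet "Nameserver limits exceeded" error above indicates resolv.conf listed more nameservers than kubelet will apply (three, mirroring glibc's MAXNS), so it drops the extras and logs the line it actually applied. A sketch of that trimming, with a hypothetical fourth nameserver standing in for whatever was dropped on this host:

    MAX_NAMESERVERS = 3   # assumed kubelet limit, matching the three applied above

    def applied_nameservers(resolv_conf: str) -> list[str]:
        servers = [line.split()[1] for line in resolv_conf.splitlines()
                   if line.startswith("nameserver")]
        return servers[:MAX_NAMESERVERS]      # extras beyond the limit are omitted

    conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")   # 8.8.4.4 is hypothetical
    print(applied_nameservers(conf))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8'] as in the log
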
Jan 30 13:52:16.766634 containerd[1551]: time="2025-01-30T13:52:16.766560601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:16.767391 containerd[1551]: time="2025-01-30T13:52:16.767359548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.701835241s" Jan 30 13:52:16.767391 containerd[1551]: time="2025-01-30T13:52:16.767388092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:52:16.768288 containerd[1551]: time="2025-01-30T13:52:16.768254788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:52:16.769283 systemd-networkd[1246]: cali7537d3f7852: Gained IPv6LL Jan 30 13:52:16.774551 containerd[1551]: time="2025-01-30T13:52:16.774519323Z" level=info msg="CreateContainer within sandbox \"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:52:17.282601 kubelet[2735]: E0130 13:52:17.282573 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:17.767991 systemd-networkd[1246]: calia119b690b57: Link UP Jan 30 13:52:17.768709 systemd-networkd[1246]: calia119b690b57: Gained carrier Jan 30 13:52:17.796631 containerd[1551]: time="2025-01-30T13:52:17.796521385Z" level=info msg="CreateContainer within sandbox \"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\"" Jan 30 13:52:17.798033 containerd[1551]: time="2025-01-30T13:52:17.797991093Z" level=info msg="StartContainer for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\"" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.673 [INFO][4889] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0 coredns-7db6d8ff4d- kube-system 8e61af85-1ee2-489b-aa60-bd8bc7907e82 950 0 2025-01-30 13:51:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-58w8t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia119b690b57 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.673 [INFO][4889] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.697 [INFO][4902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" HandleID="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.704 [INFO][4902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" HandleID="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-58w8t", "timestamp":"2025-01-30 13:52:17.697395711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.704 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.704 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.704 [INFO][4902] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.705 [INFO][4902] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.710 [INFO][4902] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.713 [INFO][4902] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.714 [INFO][4902] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.717 [INFO][4902] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.717 [INFO][4902] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.718 [INFO][4902] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003 Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.736 [INFO][4902] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.763 [INFO][4902] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.763 [INFO][4902] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" host="localhost" Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.763 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:17.801318 containerd[1551]: 2025-01-30 13:52:17.763 [INFO][4902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" HandleID="k8s-pod-network.ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.801815 containerd[1551]: 2025-01-30 13:52:17.766 [INFO][4889] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8e61af85-1ee2-489b-aa60-bd8bc7907e82", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-58w8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia119b690b57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:17.801815 containerd[1551]: 2025-01-30 13:52:17.766 [INFO][4889] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.801815 containerd[1551]: 2025-01-30 13:52:17.766 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia119b690b57 
ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.801815 containerd[1551]: 2025-01-30 13:52:17.768 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.801815 containerd[1551]: 2025-01-30 13:52:17.768 [INFO][4889] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8e61af85-1ee2-489b-aa60-bd8bc7907e82", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003", Pod:"coredns-7db6d8ff4d-58w8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia119b690b57", MAC:"e6:41:29:e9:be:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:17.801815 containerd[1551]: 2025-01-30 13:52:17.797 [INFO][4889] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003" Namespace="kube-system" Pod="coredns-7db6d8ff4d-58w8t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0" Jan 30 13:52:17.851122 containerd[1551]: time="2025-01-30T13:52:17.850825125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:17.851122 containerd[1551]: time="2025-01-30T13:52:17.851012905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:17.851869 containerd[1551]: time="2025-01-30T13:52:17.851720366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.851869 containerd[1551]: time="2025-01-30T13:52:17.851825417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.878608 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:18.001850 containerd[1551]: time="2025-01-30T13:52:18.001787886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-58w8t,Uid:8e61af85-1ee2-489b-aa60-bd8bc7907e82,Namespace:kube-system,Attempt:1,} returns sandbox id \"ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003\"" Jan 30 13:52:18.002008 containerd[1551]: time="2025-01-30T13:52:18.001806481Z" level=info msg="StartContainer for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" returns successfully" Jan 30 13:52:18.002725 kubelet[2735]: E0130 13:52:18.002690 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:18.004519 containerd[1551]: time="2025-01-30T13:52:18.004491901Z" level=info msg="CreateContainer within sandbox \"ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:18.243094 containerd[1551]: time="2025-01-30T13:52:18.243054520Z" level=info msg="CreateContainer within sandbox \"ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d37b9f146cc5ee6865cb19e411a7ec76b2191451abd378322e446fe214dad23c\"" Jan 30 13:52:18.243774 containerd[1551]: time="2025-01-30T13:52:18.243537233Z" level=info msg="StartContainer for \"d37b9f146cc5ee6865cb19e411a7ec76b2191451abd378322e446fe214dad23c\"" Jan 30 13:52:18.382272 containerd[1551]: time="2025-01-30T13:52:18.382223853Z" level=info msg="StartContainer for \"d37b9f146cc5ee6865cb19e411a7ec76b2191451abd378322e446fe214dad23c\" returns successfully" Jan 30 13:52:18.388097 kubelet[2735]: I0130 13:52:18.388039 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c7c58cdc5-s4npw" podStartSLOduration=24.685095569 podStartE2EDuration="28.388018391s" podCreationTimestamp="2025-01-30 13:51:50 +0000 UTC" firstStartedPulling="2025-01-30 13:52:13.065118269 +0000 UTC m=+44.701743299" lastFinishedPulling="2025-01-30 13:52:16.768041091 +0000 UTC m=+48.404666121" observedRunningTime="2025-01-30 13:52:18.387129654 +0000 UTC m=+50.023754684" watchObservedRunningTime="2025-01-30 13:52:18.388018391 +0000 UTC m=+50.024643421" Jan 30 13:52:19.293278 kubelet[2735]: E0130 13:52:19.293247 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:19.375659 kubelet[2735]: I0130 13:52:19.375452 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-58w8t" podStartSLOduration=35.375434331 podStartE2EDuration="35.375434331s" podCreationTimestamp="2025-01-30 13:51:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:19.374555282 +0000 UTC m=+51.011180312" watchObservedRunningTime="2025-01-30 13:52:19.375434331 +0000 UTC m=+51.012059361" Jan 30 13:52:19.458727 systemd-networkd[1246]: calia119b690b57: Gained IPv6LL Jan 30 13:52:19.951419 containerd[1551]: time="2025-01-30T13:52:19.951362984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:19.954461 containerd[1551]: time="2025-01-30T13:52:19.954150607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:52:19.955670 containerd[1551]: time="2025-01-30T13:52:19.955642786Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:19.957720 containerd[1551]: time="2025-01-30T13:52:19.957677972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:19.958330 containerd[1551]: time="2025-01-30T13:52:19.958298326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.190006347s" Jan 30 13:52:19.958370 containerd[1551]: time="2025-01-30T13:52:19.958328875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:52:19.959115 containerd[1551]: time="2025-01-30T13:52:19.959091772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:52:19.960392 containerd[1551]: time="2025-01-30T13:52:19.960361646Z" level=info msg="CreateContainer within sandbox \"556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:19.971748 containerd[1551]: time="2025-01-30T13:52:19.971709534Z" level=info msg="CreateContainer within sandbox \"556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b838ee87882ff3ac604c6c77c14d8987f2bed3ed8542aa6e819dde637651ca94\"" Jan 30 13:52:19.972970 containerd[1551]: time="2025-01-30T13:52:19.972925566Z" level=info msg="StartContainer for \"b838ee87882ff3ac604c6c77c14d8987f2bed3ed8542aa6e819dde637651ca94\"" Jan 30 13:52:20.038615 containerd[1551]: time="2025-01-30T13:52:20.038576530Z" level=info msg="StartContainer for \"b838ee87882ff3ac604c6c77c14d8987f2bed3ed8542aa6e819dde637651ca94\" returns successfully" Jan 30 13:52:20.295871 kubelet[2735]: E0130 13:52:20.295843 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:20.304442 kubelet[2735]: I0130 13:52:20.304394 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-58896df755-xxq4k" podStartSLOduration=24.040705513 podStartE2EDuration="30.304377287s" podCreationTimestamp="2025-01-30 13:51:50 +0000 UTC" firstStartedPulling="2025-01-30 13:52:13.6952957 +0000 UTC m=+45.331920730" lastFinishedPulling="2025-01-30 13:52:19.958967474 +0000 UTC m=+51.595592504" observedRunningTime="2025-01-30 13:52:20.303814072 +0000 UTC m=+51.940439102" watchObservedRunningTime="2025-01-30 13:52:20.304377287 +0000 UTC m=+51.941002317" Jan 30 13:52:20.379376 containerd[1551]: time="2025-01-30T13:52:20.379335871Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:20.380292 containerd[1551]: time="2025-01-30T13:52:20.380237401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:52:20.382874 containerd[1551]: time="2025-01-30T13:52:20.382710562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 423.594384ms" Jan 30 13:52:20.382874 containerd[1551]: time="2025-01-30T13:52:20.382756210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:52:20.384436 containerd[1551]: time="2025-01-30T13:52:20.384409746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:52:20.386619 containerd[1551]: time="2025-01-30T13:52:20.386591681Z" level=info msg="CreateContainer within sandbox \"1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:20.401797 containerd[1551]: time="2025-01-30T13:52:20.401745106Z" level=info msg="CreateContainer within sandbox \"1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c10cad0749580e64eb7a3e491698f81cc08fcdb625ef175b422182ee34304bc4\"" Jan 30 13:52:20.403239 containerd[1551]: time="2025-01-30T13:52:20.402676334Z" level=info msg="StartContainer for \"c10cad0749580e64eb7a3e491698f81cc08fcdb625ef175b422182ee34304bc4\"" Jan 30 13:52:20.488411 containerd[1551]: time="2025-01-30T13:52:20.488365109Z" level=info msg="StartContainer for \"c10cad0749580e64eb7a3e491698f81cc08fcdb625ef175b422182ee34304bc4\" returns successfully" Jan 30 13:52:21.298933 kubelet[2735]: E0130 13:52:21.298900 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:21.351195 kubelet[2735]: I0130 13:52:21.350794 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58896df755-7x46h" podStartSLOduration=25.980401712 podStartE2EDuration="31.350774744s" podCreationTimestamp="2025-01-30 13:51:50 +0000 UTC" firstStartedPulling="2025-01-30 13:52:15.013600341 +0000 UTC m=+46.650225371" lastFinishedPulling="2025-01-30 13:52:20.383973373 +0000 UTC m=+52.020598403" observedRunningTime="2025-01-30 13:52:21.349933379 +0000 UTC m=+52.986558419" 
watchObservedRunningTime="2025-01-30 13:52:21.350774744 +0000 UTC m=+52.987399774" Jan 30 13:52:21.712386 systemd[1]: Started sshd@13-10.0.0.158:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066). Jan 30 13:52:21.745443 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:52:21.747091 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:21.752425 systemd-logind[1529]: New session 14 of user core. Jan 30 13:52:21.761976 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:52:21.924801 sshd[5158]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:21.928736 systemd[1]: sshd@13-10.0.0.158:22-10.0.0.1:56066.service: Deactivated successfully. Jan 30 13:52:21.931127 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:52:21.931402 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:52:21.932072 systemd-logind[1529]: Removed session 14. Jan 30 13:52:22.050108 containerd[1551]: time="2025-01-30T13:52:22.050057837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:22.077341 containerd[1551]: time="2025-01-30T13:52:22.077288977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:52:22.112501 containerd[1551]: time="2025-01-30T13:52:22.112470514Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:22.129095 containerd[1551]: time="2025-01-30T13:52:22.129034323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:22.129664 containerd[1551]: time="2025-01-30T13:52:22.129633677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.745195197s" Jan 30 13:52:22.129731 containerd[1551]: time="2025-01-30T13:52:22.129669405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:52:22.136596 containerd[1551]: time="2025-01-30T13:52:22.136562104Z" level=info msg="CreateContainer within sandbox \"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:52:22.296286 containerd[1551]: time="2025-01-30T13:52:22.296235500Z" level=info msg="CreateContainer within sandbox \"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5163fedb3d5ad60558d777172d47c5a7be2b2c119831bd1d3a0faff93e7eff76\"" Jan 30 13:52:22.296808 containerd[1551]: time="2025-01-30T13:52:22.296681481Z" level=info msg="StartContainer for \"5163fedb3d5ad60558d777172d47c5a7be2b2c119831bd1d3a0faff93e7eff76\"" Jan 30 13:52:22.303256 kubelet[2735]: E0130 13:52:22.302872 2735 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:22.363312 containerd[1551]: time="2025-01-30T13:52:22.363259044Z" level=info msg="StartContainer for \"5163fedb3d5ad60558d777172d47c5a7be2b2c119831bd1d3a0faff93e7eff76\" returns successfully" Jan 30 13:52:22.364563 containerd[1551]: time="2025-01-30T13:52:22.364524798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:52:23.875447 containerd[1551]: time="2025-01-30T13:52:23.875395709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.876197 containerd[1551]: time="2025-01-30T13:52:23.876132753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:52:23.877267 containerd[1551]: time="2025-01-30T13:52:23.877221059Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.879609 containerd[1551]: time="2025-01-30T13:52:23.879572374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.880223 containerd[1551]: time="2025-01-30T13:52:23.880190462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.515632601s" Jan 30 13:52:23.880279 containerd[1551]: time="2025-01-30T13:52:23.880220820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:52:23.882811 containerd[1551]: time="2025-01-30T13:52:23.882746207Z" level=info msg="CreateContainer within sandbox \"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:52:23.896945 containerd[1551]: time="2025-01-30T13:52:23.896900082Z" level=info msg="CreateContainer within sandbox \"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bf692f30e28230317a55710d88c898b047bcb38cdbb7b5e868bb6d88d7cb8751\"" Jan 30 13:52:23.897463 containerd[1551]: time="2025-01-30T13:52:23.897437167Z" level=info msg="StartContainer for \"bf692f30e28230317a55710d88c898b047bcb38cdbb7b5e868bb6d88d7cb8751\"" Jan 30 13:52:23.952765 containerd[1551]: time="2025-01-30T13:52:23.952719654Z" level=info msg="StartContainer for \"bf692f30e28230317a55710d88c898b047bcb38cdbb7b5e868bb6d88d7cb8751\" returns successfully" Jan 30 13:52:24.226319 kubelet[2735]: I0130 13:52:24.226221 2735 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 
13:52:24.226319 kubelet[2735]: I0130 13:52:24.226250 2735 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:52:24.432095 kubelet[2735]: I0130 13:52:24.431763 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zgrtj" podStartSLOduration=25.594581609 podStartE2EDuration="34.431744133s" podCreationTimestamp="2025-01-30 13:51:50 +0000 UTC" firstStartedPulling="2025-01-30 13:52:15.043734546 +0000 UTC m=+46.680359576" lastFinishedPulling="2025-01-30 13:52:23.88089707 +0000 UTC m=+55.517522100" observedRunningTime="2025-01-30 13:52:24.431685752 +0000 UTC m=+56.068310782" watchObservedRunningTime="2025-01-30 13:52:24.431744133 +0000 UTC m=+56.068369163" Jan 30 13:52:26.934371 systemd[1]: Started sshd@14-10.0.0.158:22-10.0.0.1:56068.service - OpenSSH per-connection server daemon (10.0.0.1:56068). Jan 30 13:52:26.966437 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 56068 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:52:26.968148 sshd[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:26.971800 systemd-logind[1529]: New session 15 of user core. Jan 30 13:52:26.980399 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:52:27.093504 sshd[5256]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:27.097247 systemd[1]: sshd@14-10.0.0.158:22-10.0.0.1:56068.service: Deactivated successfully. Jan 30 13:52:27.099475 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:52:27.100130 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:52:27.100903 systemd-logind[1529]: Removed session 15. Jan 30 13:52:28.436597 containerd[1551]: time="2025-01-30T13:52:28.436559006Z" level=info msg="StopPodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\"" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.474 [WARNING][5288] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zgrtj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0ae419-d122-4e1e-bebf-46a1a780d55b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d", Pod:"csi-node-driver-zgrtj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e9ad1f216e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.474 [INFO][5288] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.474 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" iface="eth0" netns="" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.474 [INFO][5288] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.474 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.496 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.496 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.496 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.502 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.502 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.503 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.509680 containerd[1551]: 2025-01-30 13:52:28.506 [INFO][5288] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.510229 containerd[1551]: time="2025-01-30T13:52:28.509730403Z" level=info msg="TearDown network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" successfully" Jan 30 13:52:28.510229 containerd[1551]: time="2025-01-30T13:52:28.509762895Z" level=info msg="StopPodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" returns successfully" Jan 30 13:52:28.510455 containerd[1551]: time="2025-01-30T13:52:28.510431759Z" level=info msg="RemovePodSandbox for \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\"" Jan 30 13:52:28.512639 containerd[1551]: time="2025-01-30T13:52:28.512609669Z" level=info msg="Forcibly stopping sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\"" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.547 [WARNING][5320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zgrtj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0ae419-d122-4e1e-bebf-46a1a780d55b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20826714310f890976170ffd5c1982a77cdb857ef756bc46eb15b8e99a9e991d", Pod:"csi-node-driver-zgrtj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e9ad1f216e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.547 [INFO][5320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.547 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" iface="eth0" netns="" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.547 [INFO][5320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.547 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.567 [INFO][5328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.567 [INFO][5328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.567 [INFO][5328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.572 [WARNING][5328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.572 [INFO][5328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" HandleID="k8s-pod-network.56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Workload="localhost-k8s-csi--node--driver--zgrtj-eth0" Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.574 [INFO][5328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.579876 containerd[1551]: 2025-01-30 13:52:28.577 [INFO][5320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449" Jan 30 13:52:28.580311 containerd[1551]: time="2025-01-30T13:52:28.579913927Z" level=info msg="TearDown network for sandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" successfully" Jan 30 13:52:28.588302 containerd[1551]: time="2025-01-30T13:52:28.588248095Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:28.588372 containerd[1551]: time="2025-01-30T13:52:28.588348036Z" level=info msg="RemovePodSandbox \"56b1f46b96f1dbd9682e616b43493d10076109bd6a010b5e847ba39f638d8449\" returns successfully" Jan 30 13:52:28.588893 containerd[1551]: time="2025-01-30T13:52:28.588869600Z" level=info msg="StopPodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\"" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.628 [WARNING][5351] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--7x46h-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42", Pod:"calico-apiserver-58896df755-7x46h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif82d72ed475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.628 [INFO][5351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.628 [INFO][5351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" iface="eth0" netns="" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.628 [INFO][5351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.628 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.652 [INFO][5358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.652 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.652 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.657 [WARNING][5358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.657 [INFO][5358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.659 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.664458 containerd[1551]: 2025-01-30 13:52:28.661 [INFO][5351] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.664930 containerd[1551]: time="2025-01-30T13:52:28.664493369Z" level=info msg="TearDown network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" successfully" Jan 30 13:52:28.664930 containerd[1551]: time="2025-01-30T13:52:28.664518546Z" level=info msg="StopPodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" returns successfully" Jan 30 13:52:28.665100 containerd[1551]: time="2025-01-30T13:52:28.665064517Z" level=info msg="RemovePodSandbox for \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\"" Jan 30 13:52:28.665181 containerd[1551]: time="2025-01-30T13:52:28.665107168Z" level=info msg="Forcibly stopping sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\"" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.701 [WARNING][5381] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--7x46h-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6de7ab5-0ba7-46df-aa33-d3bbedd226fa", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fd78ffb110fbd0b552c2cf6671c69ce26070e45811b0e3974e486c9da7ecc42", Pod:"calico-apiserver-58896df755-7x46h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif82d72ed475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.701 [INFO][5381] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.701 [INFO][5381] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" iface="eth0" netns="" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.702 [INFO][5381] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.702 [INFO][5381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.724 [INFO][5388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.725 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.725 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.729 [WARNING][5388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.729 [INFO][5388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" HandleID="k8s-pod-network.69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Workload="localhost-k8s-calico--apiserver--58896df755--7x46h-eth0" Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.730 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.736104 containerd[1551]: 2025-01-30 13:52:28.733 [INFO][5381] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e" Jan 30 13:52:28.736642 containerd[1551]: time="2025-01-30T13:52:28.736153416Z" level=info msg="TearDown network for sandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" successfully" Jan 30 13:52:28.746947 containerd[1551]: time="2025-01-30T13:52:28.746904920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:28.746947 containerd[1551]: time="2025-01-30T13:52:28.746962039Z" level=info msg="RemovePodSandbox \"69acfabe35fa53bec218b82bc69a696a97ad6776f865761e26819b82f02e002e\" returns successfully" Jan 30 13:52:28.747484 containerd[1551]: time="2025-01-30T13:52:28.747463835Z" level=info msg="StopPodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\"" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.784 [WARNING][5410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0", GenerateName:"calico-kube-controllers-5c7c58cdc5-", Namespace:"calico-system", SelfLink:"", UID:"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7c58cdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94", Pod:"calico-kube-controllers-5c7c58cdc5-s4npw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42c7947cdb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.785 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.785 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" iface="eth0" netns="" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.785 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.785 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.813 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.813 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.813 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.818 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.818 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.820 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.825790 containerd[1551]: 2025-01-30 13:52:28.823 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.826343 containerd[1551]: time="2025-01-30T13:52:28.825838194Z" level=info msg="TearDown network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" successfully" Jan 30 13:52:28.826343 containerd[1551]: time="2025-01-30T13:52:28.825863232Z" level=info msg="StopPodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" returns successfully" Jan 30 13:52:28.826395 containerd[1551]: time="2025-01-30T13:52:28.826355500Z" level=info msg="RemovePodSandbox for \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\"" Jan 30 13:52:28.826395 containerd[1551]: time="2025-01-30T13:52:28.826383533Z" level=info msg="Forcibly stopping sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\"" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.861 [WARNING][5439] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0", GenerateName:"calico-kube-controllers-5c7c58cdc5-", Namespace:"calico-system", SelfLink:"", UID:"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7c58cdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94", Pod:"calico-kube-controllers-5c7c58cdc5-s4npw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42c7947cdb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.862 [INFO][5439] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.862 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" iface="eth0" netns="" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.862 [INFO][5439] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.862 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.883 [INFO][5446] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.883 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.883 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.888 [WARNING][5446] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.888 [INFO][5446] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" HandleID="k8s-pod-network.e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0" Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.889 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.895149 containerd[1551]: 2025-01-30 13:52:28.892 [INFO][5439] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a" Jan 30 13:52:28.895646 containerd[1551]: time="2025-01-30T13:52:28.895215924Z" level=info msg="TearDown network for sandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" successfully" Jan 30 13:52:28.958500 containerd[1551]: time="2025-01-30T13:52:28.958454757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:28.958589 containerd[1551]: time="2025-01-30T13:52:28.958516373Z" level=info msg="RemovePodSandbox \"e28eacb45ffd5790b284e190936eb844b21c3c66568e68258598f1bfbf5f503a\" returns successfully" Jan 30 13:52:28.959146 containerd[1551]: time="2025-01-30T13:52:28.959100076Z" level=info msg="StopPodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\"" Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:28.995 [WARNING][5468] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"decac6b5-b980-4127-9316-ce25e5c0883a", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a", Pod:"coredns-7db6d8ff4d-br7dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7537d3f7852", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:28.995 [INFO][5468] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:28.995 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" iface="eth0" netns="" Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:28.995 [INFO][5468] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:28.995 [INFO][5468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.015 [INFO][5475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0" Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.016 [INFO][5475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.016 [INFO][5475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.021 [WARNING][5475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0"
Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.021 [INFO][5475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0"
Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.022 [INFO][5475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:29.028374 containerd[1551]: 2025-01-30 13:52:29.025 [INFO][5468] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69"
Jan 30 13:52:29.028374 containerd[1551]: time="2025-01-30T13:52:29.028333070Z" level=info msg="TearDown network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" successfully"
Jan 30 13:52:29.028374 containerd[1551]: time="2025-01-30T13:52:29.028367044Z" level=info msg="StopPodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" returns successfully"
Jan 30 13:52:29.028979 containerd[1551]: time="2025-01-30T13:52:29.028941699Z" level=info msg="RemovePodSandbox for \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\""
Jan 30 13:52:29.029049 containerd[1551]: time="2025-01-30T13:52:29.028993378Z" level=info msg="Forcibly stopping sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\""
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.065 [WARNING][5497] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"decac6b5-b980-4127-9316-ce25e5c0883a", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65834eea8f5bfa95cd11f87a22b5e333e33b9b018dc985f3917fc730e73a564a", Pod:"coredns-7db6d8ff4d-br7dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7537d3f7852", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.066 [INFO][5497] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69"
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.066 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" iface="eth0" netns=""
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.066 [INFO][5497] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69"
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.066 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69"
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.087 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0"
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.087 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.087 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.091 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0"
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.092 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" HandleID="k8s-pod-network.ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69" Workload="localhost-k8s-coredns--7db6d8ff4d--br7dh-eth0"
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.093 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:29.098961 containerd[1551]: 2025-01-30 13:52:29.096 [INFO][5497] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69"
Jan 30 13:52:29.099511 containerd[1551]: time="2025-01-30T13:52:29.099003818Z" level=info msg="TearDown network for sandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" successfully"
Jan 30 13:52:29.103027 containerd[1551]: time="2025-01-30T13:52:29.102996895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:52:29.103084 containerd[1551]: time="2025-01-30T13:52:29.103054705Z" level=info msg="RemovePodSandbox \"ece38a88e72f04c62a1142f5b1a76965910a36093deaae42e3ea7942aae58e69\" returns successfully"
Jan 30 13:52:29.103559 containerd[1551]: time="2025-01-30T13:52:29.103537715Z" level=info msg="StopPodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\""
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.136 [WARNING][5527] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8e61af85-1ee2-489b-aa60-bd8bc7907e82", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003", Pod:"coredns-7db6d8ff4d-58w8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia119b690b57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.136 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.136 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" iface="eth0" netns=""
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.136 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.136 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.159 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0"
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.159 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.159 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.164 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0"
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.164 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0"
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.165 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:29.170664 containerd[1551]: 2025-01-30 13:52:29.168 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.171065 containerd[1551]: time="2025-01-30T13:52:29.170702305Z" level=info msg="TearDown network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" successfully"
Jan 30 13:52:29.171065 containerd[1551]: time="2025-01-30T13:52:29.170728345Z" level=info msg="StopPodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" returns successfully"
Jan 30 13:52:29.171240 containerd[1551]: time="2025-01-30T13:52:29.171200133Z" level=info msg="RemovePodSandbox for \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\""
Jan 30 13:52:29.171271 containerd[1551]: time="2025-01-30T13:52:29.171239809Z" level=info msg="Forcibly stopping sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\""
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.206 [WARNING][5556] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8e61af85-1ee2-489b-aa60-bd8bc7907e82", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffc0e1ea565f3e6d3e9678ce90563365315c983b30c6ec0c170f55b83c764003", Pod:"coredns-7db6d8ff4d-58w8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia119b690b57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.206 [INFO][5556] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.207 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" iface="eth0" netns=""
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.207 [INFO][5556] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.207 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.228 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0"
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.228 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.228 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.233 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0"
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.233 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" HandleID="k8s-pod-network.5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde" Workload="localhost-k8s-coredns--7db6d8ff4d--58w8t-eth0"
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.234 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:29.240034 containerd[1551]: 2025-01-30 13:52:29.237 [INFO][5556] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde"
Jan 30 13:52:29.240452 containerd[1551]: time="2025-01-30T13:52:29.240054102Z" level=info msg="TearDown network for sandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" successfully"
Jan 30 13:52:29.243844 containerd[1551]: time="2025-01-30T13:52:29.243815967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:52:29.243889 containerd[1551]: time="2025-01-30T13:52:29.243860242Z" level=info msg="RemovePodSandbox \"5f582fdd3082baafa32efa37b790958ce370e7f17c199d8bcbcdbdd8cdf3ddde\" returns successfully"
Jan 30 13:52:29.244209 containerd[1551]: time="2025-01-30T13:52:29.244190171Z" level=info msg="StopPodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\""
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.279 [WARNING][5585] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"78380a0f-05ec-4046-8a96-ad8ade5588e4", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3", Pod:"calico-apiserver-58896df755-xxq4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41bf2ac1572", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.280 [INFO][5585] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.280 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" iface="eth0" netns=""
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.280 [INFO][5585] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.280 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.300 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.300 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.300 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.305 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.305 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.306 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:29.312200 containerd[1551]: 2025-01-30 13:52:29.309 [INFO][5585] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.312724 containerd[1551]: time="2025-01-30T13:52:29.312204949Z" level=info msg="TearDown network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" successfully"
Jan 30 13:52:29.312724 containerd[1551]: time="2025-01-30T13:52:29.312230679Z" level=info msg="StopPodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" returns successfully"
Jan 30 13:52:29.313438 containerd[1551]: time="2025-01-30T13:52:29.313406448Z" level=info msg="RemovePodSandbox for \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\""
Jan 30 13:52:29.313438 containerd[1551]: time="2025-01-30T13:52:29.313434532Z" level=info msg="Forcibly stopping sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\""
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.349 [WARNING][5614] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0", GenerateName:"calico-apiserver-58896df755-", Namespace:"calico-apiserver", SelfLink:"", UID:"78380a0f-05ec-4046-8a96-ad8ade5588e4", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58896df755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"556ecd725cd2b81dfdf1efaacf32ba99d56b5cae068e7cb5e434c5947feaa3d3", Pod:"calico-apiserver-58896df755-xxq4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41bf2ac1572", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.350 [INFO][5614] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.350 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" iface="eth0" netns=""
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.350 [INFO][5614] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.350 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.372 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.372 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.372 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.377 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.378 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" HandleID="k8s-pod-network.f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f" Workload="localhost-k8s-calico--apiserver--58896df755--xxq4k-eth0"
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.379 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:29.385336 containerd[1551]: 2025-01-30 13:52:29.382 [INFO][5614] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f"
Jan 30 13:52:29.385773 containerd[1551]: time="2025-01-30T13:52:29.385368707Z" level=info msg="TearDown network for sandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" successfully"
Jan 30 13:52:29.398749 containerd[1551]: time="2025-01-30T13:52:29.398692868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:52:29.398804 containerd[1551]: time="2025-01-30T13:52:29.398777900Z" level=info msg="RemovePodSandbox \"f80a73a987623b7890558b9128adf5c45c3fe652d15ac34121509df77a78647f\" returns successfully"
Jan 30 13:52:32.105481 systemd[1]: Started sshd@15-10.0.0.158:22-10.0.0.1:36354.service - OpenSSH per-connection server daemon (10.0.0.1:36354).
Jan 30 13:52:32.138666 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:32.140150 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:32.144109 systemd-logind[1529]: New session 16 of user core.
Jan 30 13:52:32.157405 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:52:32.265805 sshd[5673]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:32.269226 systemd[1]: sshd@15-10.0.0.158:22-10.0.0.1:36354.service: Deactivated successfully.
Jan 30 13:52:32.271342 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:52:32.271392 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:52:32.272455 systemd-logind[1529]: Removed session 16.
Jan 30 13:52:35.440839 kubelet[2735]: E0130 13:52:35.440790 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:52:37.275393 systemd[1]: Started sshd@16-10.0.0.158:22-10.0.0.1:36364.service - OpenSSH per-connection server daemon (10.0.0.1:36364).
Jan 30 13:52:37.301730 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 36364 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:37.303400 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:37.307447 systemd-logind[1529]: New session 17 of user core.
Jan 30 13:52:37.319478 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:52:37.437449 sshd[5688]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:37.446463 systemd[1]: Started sshd@17-10.0.0.158:22-10.0.0.1:36370.service - OpenSSH per-connection server daemon (10.0.0.1:36370).
Jan 30 13:52:37.447087 systemd[1]: sshd@16-10.0.0.158:22-10.0.0.1:36364.service: Deactivated successfully.
Jan 30 13:52:37.450544 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:52:37.451323 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:52:37.452280 systemd-logind[1529]: Removed session 17.
Jan 30 13:52:37.476742 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 36370 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:37.478491 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:37.482503 systemd-logind[1529]: New session 18 of user core.
Jan 30 13:52:37.489418 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:52:37.680833 sshd[5701]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:37.698375 systemd[1]: Started sshd@18-10.0.0.158:22-10.0.0.1:36384.service - OpenSSH per-connection server daemon (10.0.0.1:36384).
Jan 30 13:52:37.698834 systemd[1]: sshd@17-10.0.0.158:22-10.0.0.1:36370.service: Deactivated successfully.
Jan 30 13:52:37.702312 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:52:37.702542 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:52:37.704584 systemd-logind[1529]: Removed session 18.
Jan 30 13:52:37.725443 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 36384 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:37.726908 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:37.730734 systemd-logind[1529]: New session 19 of user core.
Jan 30 13:52:37.741460 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:52:39.159977 sshd[5714]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:39.170487 systemd[1]: Started sshd@19-10.0.0.158:22-10.0.0.1:36392.service - OpenSSH per-connection server daemon (10.0.0.1:36392).
Jan 30 13:52:39.170984 systemd[1]: sshd@18-10.0.0.158:22-10.0.0.1:36384.service: Deactivated successfully.
Jan 30 13:52:39.177897 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:52:39.179327 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:52:39.181080 systemd-logind[1529]: Removed session 19.
Jan 30 13:52:39.216078 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 36392 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:39.217582 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:39.221771 systemd-logind[1529]: New session 20 of user core.
Jan 30 13:52:39.226513 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:52:39.430916 sshd[5736]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:39.440514 systemd[1]: Started sshd@20-10.0.0.158:22-10.0.0.1:36402.service - OpenSSH per-connection server daemon (10.0.0.1:36402).
Jan 30 13:52:39.441148 systemd[1]: sshd@19-10.0.0.158:22-10.0.0.1:36392.service: Deactivated successfully.
Jan 30 13:52:39.443210 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:52:39.445062 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:52:39.446786 systemd-logind[1529]: Removed session 20.
Jan 30 13:52:39.466282 sshd[5752]: Accepted publickey for core from 10.0.0.1 port 36402 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:39.467752 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:39.471809 systemd-logind[1529]: New session 21 of user core.
Jan 30 13:52:39.481462 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:52:39.593576 sshd[5752]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:39.597821 systemd[1]: sshd@20-10.0.0.158:22-10.0.0.1:36402.service: Deactivated successfully.
Jan 30 13:52:39.600286 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:52:39.600397 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:52:39.601575 systemd-logind[1529]: Removed session 21.
Jan 30 13:52:41.440984 kubelet[2735]: E0130 13:52:41.440939 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:52:44.604448 systemd[1]: Started sshd@21-10.0.0.158:22-10.0.0.1:59284.service - OpenSSH per-connection server daemon (10.0.0.1:59284).
Jan 30 13:52:44.632412 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 59284 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:44.633857 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:44.637875 systemd-logind[1529]: New session 22 of user core.
Jan 30 13:52:44.647410 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:52:44.748772 sshd[5774]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:44.752546 systemd[1]: sshd@21-10.0.0.158:22-10.0.0.1:59284.service: Deactivated successfully.
Jan 30 13:52:44.754990 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:52:44.755193 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:52:44.756054 systemd-logind[1529]: Removed session 22.
Jan 30 13:52:45.735547 containerd[1551]: time="2025-01-30T13:52:45.735462596Z" level=info msg="StopContainer for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" with timeout 300 (s)"
Jan 30 13:52:45.739753 containerd[1551]: time="2025-01-30T13:52:45.736199805Z" level=info msg="Stop container \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" with signal terminated"
Jan 30 13:52:45.792524 containerd[1551]: time="2025-01-30T13:52:45.792470757Z" level=info msg="StopContainer for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" with timeout 30 (s)"
Jan 30 13:52:45.793598 containerd[1551]: time="2025-01-30T13:52:45.793575382Z" level=info msg="Stop container \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" with signal terminated"
Jan 30 13:52:45.831226 containerd[1551]: time="2025-01-30T13:52:45.830779591Z" level=info msg="shim disconnected" id=31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25 namespace=k8s.io
Jan 30 13:52:45.831226 containerd[1551]: time="2025-01-30T13:52:45.830834515Z" level=warning msg="cleaning up after shim disconnected" id=31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25 namespace=k8s.io
Jan 30 13:52:45.831226 containerd[1551]: time="2025-01-30T13:52:45.830845255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:52:45.834774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25-rootfs.mount: Deactivated successfully.
Jan 30 13:52:45.887675 containerd[1551]: time="2025-01-30T13:52:45.887615134Z" level=info msg="StopContainer for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" returns successfully"
Jan 30 13:52:45.888327 containerd[1551]: time="2025-01-30T13:52:45.888292498Z" level=info msg="StopPodSandbox for \"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94\""
Jan 30 13:52:45.889217 containerd[1551]: time="2025-01-30T13:52:45.888336552Z" level=info msg="Container to stop \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:52:45.892992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94-shm.mount: Deactivated successfully.
Jan 30 13:52:45.921393 containerd[1551]: time="2025-01-30T13:52:45.921332854Z" level=info msg="shim disconnected" id=cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94 namespace=k8s.io
Jan 30 13:52:45.921393 containerd[1551]: time="2025-01-30T13:52:45.921384612Z" level=warning msg="cleaning up after shim disconnected" id=cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94 namespace=k8s.io
Jan 30 13:52:45.921393 containerd[1551]: time="2025-01-30T13:52:45.921393058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:52:45.921876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94-rootfs.mount: Deactivated successfully.
Jan 30 13:52:45.935883 containerd[1551]: time="2025-01-30T13:52:45.935817492Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:52:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:52:46.102064 systemd-networkd[1246]: cali42c7947cdb6: Link DOWN
Jan 30 13:52:46.102074 systemd-networkd[1246]: cali42c7947cdb6: Lost carrier
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.100 [INFO][5900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.100 [INFO][5900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" iface="eth0" netns="/var/run/netns/cni-8e8f0d05-f7a9-b282-cac0-05512911a993"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.101 [INFO][5900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" iface="eth0" netns="/var/run/netns/cni-8e8f0d05-f7a9-b282-cac0-05512911a993"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.117 [INFO][5900] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" after=16.522744ms iface="eth0" netns="/var/run/netns/cni-8e8f0d05-f7a9-b282-cac0-05512911a993"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.118 [INFO][5900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.118 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.141 [INFO][5913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" HandleID="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.141 [INFO][5913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.141 [INFO][5913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.173 [INFO][5913] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" HandleID="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.173 [INFO][5913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" HandleID="k8s-pod-network.cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94" Workload="localhost-k8s-calico--kube--controllers--5c7c58cdc5--s4npw-eth0"
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.174 [INFO][5913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:52:46.179947 containerd[1551]: 2025-01-30 13:52:46.177 [INFO][5900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94"
Jan 30 13:52:46.184088 containerd[1551]: time="2025-01-30T13:52:46.183269761Z" level=info msg="TearDown network for sandbox \"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94\" successfully"
Jan 30 13:52:46.184088 containerd[1551]: time="2025-01-30T13:52:46.183312613Z" level=info msg="StopPodSandbox for \"cec566c65eb85cdc73ed807540dcd59d9a53e9b83ae1d296c1306c4e43779c94\" returns successfully"
Jan 30 13:52:46.183704 systemd[1]: run-netns-cni\x2d8e8f0d05\x2df7a9\x2db282\x2dcac0\x2d05512911a993.mount: Deactivated successfully.
Jan 30 13:52:46.271784 kubelet[2735]: I0130 13:52:46.271746 2735 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhj82\" (UniqueName: \"kubernetes.io/projected/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-kube-api-access-jhj82\") pod \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\" (UID: \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\") "
Jan 30 13:52:46.272306 kubelet[2735]: I0130 13:52:46.272181 2735 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-tigera-ca-bundle\") pod \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\" (UID: \"a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0\") "
Jan 30 13:52:46.277365 kubelet[2735]: I0130 13:52:46.277307 2735 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-kube-api-access-jhj82" (OuterVolumeSpecName: "kube-api-access-jhj82") pod "a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" (UID: "a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0"). InnerVolumeSpecName "kube-api-access-jhj82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:52:46.277997 systemd[1]: var-lib-kubelet-pods-a5b2cb32\x2de9dd\x2d4d2a\x2d8428\x2d4543bb3ff2f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djhj82.mount: Deactivated successfully.
Jan 30 13:52:46.279062 kubelet[2735]: I0130 13:52:46.279011 2735 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" (UID: "a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:52:46.281916 systemd[1]: var-lib-kubelet-pods-a5b2cb32\x2de9dd\x2d4d2a\x2d8428\x2d4543bb3ff2f0-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Jan 30 13:52:46.364987 kubelet[2735]: I0130 13:52:46.364275 2735 scope.go:117] "RemoveContainer" containerID="31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25" Jan 30 13:52:46.367751 containerd[1551]: time="2025-01-30T13:52:46.367714105Z" level=info msg="RemoveContainer for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\"" Jan 30 13:52:46.372770 containerd[1551]: time="2025-01-30T13:52:46.372363700Z" level=info msg="RemoveContainer for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" returns successfully" Jan 30 13:52:46.372898 kubelet[2735]: I0130 13:52:46.372428 2735 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jhj82\" (UniqueName: \"kubernetes.io/projected/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-kube-api-access-jhj82\") on node \"localhost\" DevicePath \"\"" Jan 30 13:52:46.372898 kubelet[2735]: I0130 13:52:46.372444 2735 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 30 13:52:46.373303 kubelet[2735]: I0130 13:52:46.373185 2735 scope.go:117] "RemoveContainer" containerID="31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25" Jan 30 13:52:46.380034 containerd[1551]: time="2025-01-30T13:52:46.373404564Z" level=error msg="ContainerStatus for \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\": not found" Jan 30 13:52:46.380239 kubelet[2735]: E0130 13:52:46.380215 2735 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\": not found" containerID="31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25" Jan 30 13:52:46.380279 kubelet[2735]: I0130 13:52:46.380245 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25"} err="failed to get container status \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\": rpc error: code = NotFound desc = an error occurred when try to find container \"31643f3c8e8ad1393c4b270f104f0a58b1a4bdebdffd4462f807f64139fafc25\": not found" Jan 30 13:52:46.398963 kubelet[2735]: I0130 13:52:46.398922 2735 topology_manager.go:215] "Topology Admit Handler" podUID="216f1fba-6647-4748-bc1f-be56f29a5400" podNamespace="calico-system" podName="calico-kube-controllers-5766c87455-96bwn" Jan 30 13:52:46.399114 kubelet[2735]: E0130 13:52:46.398986 2735 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" containerName="calico-kube-controllers" Jan 30 13:52:46.407433 kubelet[2735]: I0130 13:52:46.407401 2735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" containerName="calico-kube-controllers" Jan 30 13:52:46.440216 kubelet[2735]: E0130 13:52:46.440181 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:46.442296 kubelet[2735]: I0130 13:52:46.442259 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0" path="/var/lib/kubelet/pods/a5b2cb32-e9dd-4d2a-8428-4543bb3ff2f0/volumes" Jan 30 13:52:46.473292 kubelet[2735]: I0130 13:52:46.473252 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/216f1fba-6647-4748-bc1f-be56f29a5400-tigera-ca-bundle\") pod \"calico-kube-controllers-5766c87455-96bwn\" (UID: \"216f1fba-6647-4748-bc1f-be56f29a5400\") " pod="calico-system/calico-kube-controllers-5766c87455-96bwn" Jan 30 13:52:46.473462 kubelet[2735]: I0130 13:52:46.473294 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hs44\" (UniqueName: \"kubernetes.io/projected/216f1fba-6647-4748-bc1f-be56f29a5400-kube-api-access-6hs44\") pod \"calico-kube-controllers-5766c87455-96bwn\" (UID: \"216f1fba-6647-4748-bc1f-be56f29a5400\") " pod="calico-system/calico-kube-controllers-5766c87455-96bwn" Jan 30 13:52:46.713370 containerd[1551]: time="2025-01-30T13:52:46.713240012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5766c87455-96bwn,Uid:216f1fba-6647-4748-bc1f-be56f29a5400,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:46.814512 systemd-networkd[1246]: cali1a8bc72747a: Link UP Jan 30 13:52:46.814730 systemd-networkd[1246]: cali1a8bc72747a: Gained carrier Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.752 [INFO][5926] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0 calico-kube-controllers-5766c87455- calico-system 216f1fba-6647-4748-bc1f-be56f29a5400 1270 0 2025-01-30 13:52:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5766c87455 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5766c87455-96bwn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1a8bc72747a [] []}} ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.753 [INFO][5926] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.777 [INFO][5938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" HandleID="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Workload="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.785 [INFO][5938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" HandleID="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" 
Workload="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5766c87455-96bwn", "timestamp":"2025-01-30 13:52:46.777776161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.785 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.785 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.785 [INFO][5938] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.787 [INFO][5938] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.791 [INFO][5938] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.794 [INFO][5938] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.795 [INFO][5938] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.797 [INFO][5938] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.797 [INFO][5938] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.799 [INFO][5938] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46 Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.802 [INFO][5938] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.808 [INFO][5938] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.808 [INFO][5938] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" host="localhost" Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.808 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:46.824809 containerd[1551]: 2025-01-30 13:52:46.808 [INFO][5938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" HandleID="k8s-pod-network.eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Workload="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0"
Jan 30 13:52:46.825627 containerd[1551]: 2025-01-30 13:52:46.811 [INFO][5926] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0", GenerateName:"calico-kube-controllers-5766c87455-", Namespace:"calico-system", SelfLink:"", UID:"216f1fba-6647-4748-bc1f-be56f29a5400", ResourceVersion:"1270", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 52, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5766c87455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5766c87455-96bwn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a8bc72747a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:52:46.825627 containerd[1551]: 2025-01-30 13:52:46.811 [INFO][5926] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0"
Jan 30 13:52:46.825627 containerd[1551]: 2025-01-30 13:52:46.811 [INFO][5926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a8bc72747a ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0"
Jan 30 13:52:46.825627 containerd[1551]: 2025-01-30 13:52:46.813 [INFO][5926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0"
Jan 30 13:52:46.825627 containerd[1551]: 2025-01-30 13:52:46.813 [INFO][5926] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0", GenerateName:"calico-kube-controllers-5766c87455-", Namespace:"calico-system", SelfLink:"", UID:"216f1fba-6647-4748-bc1f-be56f29a5400", ResourceVersion:"1270", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 52, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5766c87455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46", Pod:"calico-kube-controllers-5766c87455-96bwn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a8bc72747a", MAC:"5a:63:d4:f1:b7:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
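
The MAC 5a:63:d4:f1:b7:25 filled in above has the usual shape of a runtime-generated address: the locally-administered bit of the first octet is set (0x5a includes 0x02) and the multicast bit is clear. A sketch of generating such an address; the log does not show how Calico actually picks it, so treat this as the generic pattern:

    package main

    import (
        "crypto/rand"
        "fmt"
        "log"
        "net"
    )

    func main() {
        b := make([]byte, 6)
        if _, err := rand.Read(b); err != nil {
            log.Fatal(err)
        }
        // Set the locally-administered bit, clear the multicast bit, giving
        // the same first-octet shape as 5a:63:d4:f1:b7:25 above.
        b[0] = (b[0] | 0x02) &^ 0x01
        fmt.Println(net.HardwareAddr(b))
    }
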
Jan 30 13:52:46.825627 containerd[1551]: 2025-01-30 13:52:46.820 [INFO][5926] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46" Namespace="calico-system" Pod="calico-kube-controllers-5766c87455-96bwn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5766c87455--96bwn-eth0"
Jan 30 13:52:46.845971 containerd[1551]: time="2025-01-30T13:52:46.845853754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:52:46.845971 containerd[1551]: time="2025-01-30T13:52:46.845922173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:52:46.845971 containerd[1551]: time="2025-01-30T13:52:46.845940469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:52:46.846234 containerd[1551]: time="2025-01-30T13:52:46.846068210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
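
The interface name cali1a8bc72747a that threads through these entries is 15 characters, the kernel's IFNAMSIZ ceiling, which is why CNI plugins typically build host-side veth names as a short prefix plus a truncated hash of the endpoint identity. A sketch of that hash-and-truncate pattern; the hash input here is an assumption for illustration, not Calico's exact recipe:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // ifaceName shows the general pattern behind names like "cali1a8bc72747a":
    // prefix + 11 hex characters stays within the kernel's 15-byte limit.
    func ifaceName(prefix, endpointKey string) string {
        sum := sha1.Sum([]byte(endpointKey))
        return prefix + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        // Hypothetical key; the real input to the hash is Calico-internal.
        fmt.Println(ifaceName("cali", "calico-system/calico-kube-controllers-5766c87455-96bwn"))
    }
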
Jan 30 13:52:46.874535 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:52:46.899226 containerd[1551]: time="2025-01-30T13:52:46.899123794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5766c87455-96bwn,Uid:216f1fba-6647-4748-bc1f-be56f29a5400,Namespace:calico-system,Attempt:0,} returns sandbox id \"eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46\""
Jan 30 13:52:46.906598 containerd[1551]: time="2025-01-30T13:52:46.906549803Z" level=info msg="CreateContainer within sandbox \"eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 30 13:52:46.917908 containerd[1551]: time="2025-01-30T13:52:46.917877419Z" level=info msg="CreateContainer within sandbox \"eee37255f4f5d15a95b12e72b9c7b289ac3dde2543089b51821b81141e976d46\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8edb76425f5a81737bb1e720695d3feb65476dbd205d1eebb2880335e43913c6\""
Jan 30 13:52:46.918481 containerd[1551]: time="2025-01-30T13:52:46.918297063Z" level=info msg="StartContainer for \"8edb76425f5a81737bb1e720695d3feb65476dbd205d1eebb2880335e43913c6\""
Jan 30 13:52:46.984946 containerd[1551]: time="2025-01-30T13:52:46.984858633Z" level=info msg="StartContainer for \"8edb76425f5a81737bb1e720695d3feb65476dbd205d1eebb2880335e43913c6\" returns successfully"
Jan 30 13:52:47.433869 kubelet[2735]: I0130 13:52:47.433350 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5766c87455-96bwn" podStartSLOduration=1.433327988 podStartE2EDuration="1.433327988s" podCreationTimestamp="2025-01-30 13:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:47.379771429 +0000 UTC m=+79.016396479" watchObservedRunningTime="2025-01-30 13:52:47.433327988 +0000 UTC m=+79.069953018"
Jan 30 13:52:48.834287 systemd-networkd[1246]: cali1a8bc72747a: Gained IPv6LL
Jan 30 13:52:49.758366 systemd[1]: Started sshd@22-10.0.0.158:22-10.0.0.1:59300.service - OpenSSH per-connection server daemon (10.0.0.1:59300).
Jan 30 13:52:49.787628 sshd[6103]: Accepted publickey for core from 10.0.0.1 port 59300 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:49.789414 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:49.794066 systemd-logind[1529]: New session 23 of user core.
Jan 30 13:52:49.801507 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:52:49.910034 sshd[6103]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:49.913708 systemd[1]: sshd@22-10.0.0.158:22-10.0.0.1:59300.service: Deactivated successfully.
Jan 30 13:52:49.915903 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:52:49.916061 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:52:49.916959 systemd-logind[1529]: Removed session 23.
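
The happy path above runs RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer, which is the standard CRI call order. A compressed sketch of the same sequence, reusing the RuntimeServiceClient from the earlier status-probe sketch; configs are pared down to the identity fields visible in the log, and the image reference is hypothetical, since the log never prints it:

    // runPod sketches the RunPodSandbox -> CreateContainer -> StartContainer
    // order seen above. rt is the runtimeapi.RuntimeServiceClient from the
    // earlier sketch; real kubelet requests carry far more configuration.
    func runPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (string, error) {
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "calico-kube-controllers-5766c87455-96bwn",
                Namespace: "calico-system",
                Uid:       "216f1fba-6647-4748-bc1f-be56f29a5400",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            return "", err
        }
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "calico-kube-controllers"},
                Image:    &runtimeapi.ImageSpec{Image: "calico/kube-controllers:v3"}, // hypothetical tag
            },
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
        return cc.ContainerId, err
    }
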
Jan 30 13:52:50.440955 kubelet[2735]: E0130 13:52:50.440924 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:52:50.739326 containerd[1551]: time="2025-01-30T13:52:50.738705752Z" level=info msg="shim disconnected" id=9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40 namespace=k8s.io
Jan 30 13:52:50.739326 containerd[1551]: time="2025-01-30T13:52:50.738757119Z" level=warning msg="cleaning up after shim disconnected" id=9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40 namespace=k8s.io
Jan 30 13:52:50.739326 containerd[1551]: time="2025-01-30T13:52:50.738765455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:52:50.742567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40-rootfs.mount: Deactivated successfully.
Jan 30 13:52:50.771658 containerd[1551]: time="2025-01-30T13:52:50.771605507Z" level=info msg="StopContainer for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" returns successfully"
Jan 30 13:52:50.776580 containerd[1551]: time="2025-01-30T13:52:50.776545725Z" level=info msg="StopPodSandbox for \"e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073\""
Jan 30 13:52:50.776626 containerd[1551]: time="2025-01-30T13:52:50.776586362Z" level=info msg="Container to stop \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:52:50.781361 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073-shm.mount: Deactivated successfully.
Jan 30 13:52:50.801803 containerd[1551]: time="2025-01-30T13:52:50.801733207Z" level=info msg="shim disconnected" id=e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073 namespace=k8s.io
Jan 30 13:52:50.802126 containerd[1551]: time="2025-01-30T13:52:50.802031302Z" level=warning msg="cleaning up after shim disconnected" id=e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073 namespace=k8s.io
Jan 30 13:52:50.802126 containerd[1551]: time="2025-01-30T13:52:50.802048104Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:52:50.804757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073-rootfs.mount: Deactivated successfully.
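
Teardown mirrors startup: StopContainer first (the CONTAINER_EXITED message above just means the container had already exited, so there was nothing left to signal), then StopPodSandbox, which unwinds the CNI network state. A companion sketch to runPod above, with the IDs taken from the log:

    // stopPod sketches the stop order from the log. rt is again the
    // runtimeapi.RuntimeServiceClient from the first sketch; Timeout is the
    // grace period in seconds before the runtime escalates to SIGKILL.
    func stopPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
        const container = "9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40"
        const sandbox = "e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073"
        if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: container,
            Timeout:     30,
        }); err != nil {
            return err
        }
        _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandbox})
        return err
    }
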
Jan 30 13:52:50.823812 containerd[1551]: time="2025-01-30T13:52:50.823772458Z" level=info msg="TearDown network for sandbox \"e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073\" successfully"
Jan 30 13:52:50.823812 containerd[1551]: time="2025-01-30T13:52:50.823799460Z" level=info msg="StopPodSandbox for \"e9810399ada57c6664a10eb1f0e253ec34461892bd23315e933b4340290ac073\" returns successfully"
Jan 30 13:52:50.899189 kubelet[2735]: I0130 13:52:50.899104 2735 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b016edd-7188-40fb-b022-a7fa47abad2e-typha-certs\") pod \"9b016edd-7188-40fb-b022-a7fa47abad2e\" (UID: \"9b016edd-7188-40fb-b022-a7fa47abad2e\") "
Jan 30 13:52:50.899189 kubelet[2735]: I0130 13:52:50.899156 2735 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b016edd-7188-40fb-b022-a7fa47abad2e-tigera-ca-bundle\") pod \"9b016edd-7188-40fb-b022-a7fa47abad2e\" (UID: \"9b016edd-7188-40fb-b022-a7fa47abad2e\") "
Jan 30 13:52:50.899416 kubelet[2735]: I0130 13:52:50.899208 2735 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns8kb\" (UniqueName: \"kubernetes.io/projected/9b016edd-7188-40fb-b022-a7fa47abad2e-kube-api-access-ns8kb\") pod \"9b016edd-7188-40fb-b022-a7fa47abad2e\" (UID: \"9b016edd-7188-40fb-b022-a7fa47abad2e\") "
Jan 30 13:52:50.902838 kubelet[2735]: I0130 13:52:50.902777 2735 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b016edd-7188-40fb-b022-a7fa47abad2e-kube-api-access-ns8kb" (OuterVolumeSpecName: "kube-api-access-ns8kb") pod "9b016edd-7188-40fb-b022-a7fa47abad2e" (UID: "9b016edd-7188-40fb-b022-a7fa47abad2e"). InnerVolumeSpecName "kube-api-access-ns8kb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:52:50.903388 kubelet[2735]: I0130 13:52:50.903006 2735 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b016edd-7188-40fb-b022-a7fa47abad2e-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "9b016edd-7188-40fb-b022-a7fa47abad2e" (UID: "9b016edd-7188-40fb-b022-a7fa47abad2e"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:52:50.904635 kubelet[2735]: I0130 13:52:50.904603 2735 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b016edd-7188-40fb-b022-a7fa47abad2e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9b016edd-7188-40fb-b022-a7fa47abad2e" (UID: "9b016edd-7188-40fb-b022-a7fa47abad2e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:52:50.905237 systemd[1]: var-lib-kubelet-pods-9b016edd\x2d7188\x2d40fb\x2db022\x2da7fa47abad2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dns8kb.mount: Deactivated successfully.
Jan 30 13:52:50.905438 systemd[1]: var-lib-kubelet-pods-9b016edd\x2d7188\x2d40fb\x2db022\x2da7fa47abad2e-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Jan 30 13:52:50.908871 systemd[1]: var-lib-kubelet-pods-9b016edd\x2d7188\x2d40fb\x2db022\x2da7fa47abad2e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
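
Unit names like var-lib-kubelet-pods-9b016edd\x2d7188...-typha\x2dcerts.mount come from systemd's path escaping: the leading slash is dropped, remaining slashes become dashes, and bytes outside a small safe set (including literal dashes) become \xXX. A small sketch of that encoding for the paths above; systemd-escape(1) has additional rules (a leading dot, the root path, empty strings) that are skipped here:

    package main

    import "fmt"

    // escapePath applies the escaping visible in the log: drop the leading
    // '/', map '/' to '-', hex-escape anything outside [a-zA-Z0-9:_.].
    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:]
        }
        out := make([]byte, 0, len(p))
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out = append(out, '-')
            case c == ':' || c == '_' || c == '.' ||
                ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') || ('0' <= c && c <= '9'):
                out = append(out, c)
            default:
                out = append(out, fmt.Sprintf(`\x%02x`, c)...)
            }
        }
        return string(out)
    }

    func main() {
        p := "/var/lib/kubelet/pods/9b016edd-7188-40fb-b022-a7fa47abad2e/volumes"
        fmt.Println(escapePath(p) + ".mount")
    }
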
Jan 30 13:52:50.999764 kubelet[2735]: I0130 13:52:50.999663 2735 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b016edd-7188-40fb-b022-a7fa47abad2e-typha-certs\") on node \"localhost\" DevicePath \"\""
Jan 30 13:52:50.999764 kubelet[2735]: I0130 13:52:50.999694 2735 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b016edd-7188-40fb-b022-a7fa47abad2e-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 30 13:52:50.999764 kubelet[2735]: I0130 13:52:50.999707 2735 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ns8kb\" (UniqueName: \"kubernetes.io/projected/9b016edd-7188-40fb-b022-a7fa47abad2e-kube-api-access-ns8kb\") on node \"localhost\" DevicePath \"\""
Jan 30 13:52:51.378927 kubelet[2735]: I0130 13:52:51.378829 2735 scope.go:117] "RemoveContainer" containerID="9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40"
Jan 30 13:52:51.380994 containerd[1551]: time="2025-01-30T13:52:51.380898286Z" level=info msg="RemoveContainer for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\""
Jan 30 13:52:51.386233 containerd[1551]: time="2025-01-30T13:52:51.386210359Z" level=info msg="RemoveContainer for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" returns successfully"
Jan 30 13:52:51.386421 kubelet[2735]: I0130 13:52:51.386376 2735 scope.go:117] "RemoveContainer" containerID="9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40"
Jan 30 13:52:51.386662 containerd[1551]: time="2025-01-30T13:52:51.386614626Z" level=error msg="ContainerStatus for \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\": not found"
Jan 30 13:52:51.386823 kubelet[2735]: E0130 13:52:51.386787 2735 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\": not found" containerID="9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40"
Jan 30 13:52:51.386858 kubelet[2735]: I0130 13:52:51.386834 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40"} err="failed to get container status \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a7c0a443e69fe761a9393f55485fa98dcfbd81b4de2378c8600a55fc0604e40\": not found"
Jan 30 13:52:52.442831 kubelet[2735]: I0130 13:52:52.442790 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b016edd-7188-40fb-b022-a7fa47abad2e" path="/var/lib/kubelet/pods/9b016edd-7188-40fb-b022-a7fa47abad2e/volumes"
Jan 30 13:52:54.920367 systemd[1]: Started sshd@23-10.0.0.158:22-10.0.0.1:57974.service - OpenSSH per-connection server daemon (10.0.0.1:57974).
Jan 30 13:52:54.946553 sshd[6328]: Accepted publickey for core from 10.0.0.1 port 57974 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:52:54.947994 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:54.951699 systemd-logind[1529]: New session 24 of user core.
Jan 30 13:52:54.967401 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:52:55.068635 sshd[6328]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:55.072764 systemd[1]: sshd@23-10.0.0.158:22-10.0.0.1:57974.service: Deactivated successfully.
Jan 30 13:52:55.074965 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:52:55.075034 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:52:55.076233 systemd-logind[1529]: Removed session 24.
Jan 30 13:53:00.079373 systemd[1]: Started sshd@24-10.0.0.158:22-10.0.0.1:57988.service - OpenSSH per-connection server daemon (10.0.0.1:57988).
Jan 30 13:53:00.105508 sshd[6432]: Accepted publickey for core from 10.0.0.1 port 57988 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:53:00.107011 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:53:00.111456 systemd-logind[1529]: New session 25 of user core.
Jan 30 13:53:00.126409 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:53:00.230411 sshd[6432]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:00.234039 systemd[1]: sshd@24-10.0.0.158:22-10.0.0.1:57988.service: Deactivated successfully.
Jan 30 13:53:00.236465 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:53:00.236481 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:53:00.237528 systemd-logind[1529]: Removed session 25.
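
Each sshd@<n>-<local>:22-<peer>:<port>.service above is an instance of a template unit spawned by a socket unit with Accept=yes, one short-lived service per TCP connection, which is exactly the "per-connection server daemon" wording in the log. The actual Flatcar unit files are not reproduced here; this is the generic shape of such a pair:

    # sshd.socket: Accept=yes makes systemd fork one sshd@<instance>.service
    # per incoming connection instead of passing the listener to one daemon.
    [Unit]
    Description=OpenSSH Server Socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service: template instantiated for each accepted connection;
    # "sshd -i" serves a single connection on stdin/stdout.
    [Unit]
    Description=OpenSSH per-connection server daemon

    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket
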