Jan 30 13:48:00.920680 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:48:00.920709 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:00.920724 kernel: BIOS-provided physical RAM map: Jan 30 13:48:00.920732 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:48:00.920739 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 30 13:48:00.920747 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 30 13:48:00.920756 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 30 13:48:00.920765 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 30 13:48:00.920773 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 30 13:48:00.920781 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 30 13:48:00.920792 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 30 13:48:00.920800 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 30 13:48:00.920808 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 30 13:48:00.920817 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 30 13:48:00.920827 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 30 13:48:00.920836 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 30 13:48:00.920847 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 30 13:48:00.920856 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 30 13:48:00.920865 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 30 13:48:00.920873 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 13:48:00.920882 kernel: NX (Execute Disable) protection: active Jan 30 13:48:00.920900 kernel: APIC: Static calls initialized Jan 30 13:48:00.920925 kernel: efi: EFI v2.7 by EDK II Jan 30 13:48:00.920950 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 30 13:48:00.920959 kernel: SMBIOS 2.8 present. 
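The firmware memory map above lists which physical ranges are usable RAM and which are reserved or ACPI regions. As a purely illustrative aid (not part of the boot log), here is a minimal Python sketch that parses a few of the "BIOS-e820" entries copied from the map above and totals the usable memory; the sample lines and the 16-digit hex range format are taken directly from the log, everything else is an assumption for demonstration.

    import re

    # A few "BIOS-e820" entries copied from the map above; in practice the
    # full dmesg output would be fed in instead of this sample.
    E820_LINES = """\
    BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
    BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
    BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
    BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
    """

    ENTRY = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)")

    def usable_bytes(text):
        """Sum the sizes of all ranges whose type is 'usable'."""
        total = 0
        for line in text.splitlines():
            m = ENTRY.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

    print(f"usable in sample: {usable_bytes(E820_LINES) / 2**20:.1f} MiB")

Summing every "usable" range in the full map is roughly what the kernel reports later as the total managed memory.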
Jan 30 13:48:00.920968 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 30 13:48:00.920977 kernel: Hypervisor detected: KVM Jan 30 13:48:00.920990 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:48:00.920999 kernel: kvm-clock: using sched offset of 4034092137 cycles Jan 30 13:48:00.921009 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:48:00.921019 kernel: tsc: Detected 2794.750 MHz processor Jan 30 13:48:00.921029 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:48:00.921044 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:48:00.921053 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 30 13:48:00.921063 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:48:00.921073 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:48:00.921086 kernel: Using GB pages for direct mapping Jan 30 13:48:00.921095 kernel: Secure boot disabled Jan 30 13:48:00.921105 kernel: ACPI: Early table checksum verification disabled Jan 30 13:48:00.921114 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 30 13:48:00.921129 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 30 13:48:00.921139 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:48:00.921149 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:48:00.921162 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 30 13:48:00.921171 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:48:00.921181 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:48:00.921191 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:48:00.921201 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:48:00.921211 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 30 13:48:00.921221 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 30 13:48:00.921234 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 30 13:48:00.921244 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 30 13:48:00.921254 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 30 13:48:00.921264 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 30 13:48:00.921273 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 30 13:48:00.921283 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 30 13:48:00.921293 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 30 13:48:00.921303 kernel: No NUMA configuration found Jan 30 13:48:00.921313 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 30 13:48:00.921325 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 30 13:48:00.921335 kernel: Zone ranges: Jan 30 13:48:00.921345 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:48:00.921355 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 30 13:48:00.921365 kernel: Normal empty Jan 30 13:48:00.921375 kernel: Movable zone start for each node Jan 30 13:48:00.921384 kernel: Early memory node ranges Jan 30 13:48:00.921394 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:48:00.921404 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 30 13:48:00.921413 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 30 13:48:00.921427 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 30 13:48:00.921436 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 30 13:48:00.921446 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 30 13:48:00.921456 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 30 13:48:00.921466 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:48:00.921476 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:48:00.921486 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 30 13:48:00.921496 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:48:00.921505 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 30 13:48:00.921519 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 30 13:48:00.921529 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 30 13:48:00.921539 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:48:00.921549 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:48:00.921559 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:48:00.921569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:48:00.921579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:48:00.921589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:48:00.921599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:48:00.921611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:48:00.921621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:48:00.921631 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:48:00.921651 kernel: TSC deadline timer available Jan 30 13:48:00.921661 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 30 13:48:00.921850 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:48:00.921861 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 30 13:48:00.921870 kernel: kvm-guest: setup PV sched yield Jan 30 13:48:00.921881 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 30 13:48:00.921890 kernel: Booting paravirtualized kernel on KVM Jan 30 13:48:00.921905 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:48:00.921915 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 30 13:48:00.921925 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 30 13:48:00.921935 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 30 13:48:00.921945 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 30 13:48:00.921955 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:48:00.921965 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:48:00.921976 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 
13:48:00.921990 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:48:00.922025 kernel: random: crng init done Jan 30 13:48:00.922036 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:48:00.922047 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:48:00.922057 kernel: Fallback order for Node 0: 0 Jan 30 13:48:00.922067 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 30 13:48:00.922076 kernel: Policy zone: DMA32 Jan 30 13:48:00.922091 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:48:00.922101 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved) Jan 30 13:48:00.922115 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:48:00.922125 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:48:00.922135 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:48:00.922145 kernel: Dynamic Preempt: voluntary Jan 30 13:48:00.922165 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:48:00.922179 kernel: rcu: RCU event tracing is enabled. Jan 30 13:48:00.922190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:48:00.922200 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:48:00.922210 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:48:00.922221 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:48:00.922231 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:48:00.922241 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:48:00.922255 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 30 13:48:00.922265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:48:00.922275 kernel: Console: colour dummy device 80x25 Jan 30 13:48:00.922285 kernel: printk: console [ttyS0] enabled Jan 30 13:48:00.922295 kernel: ACPI: Core revision 20230628 Jan 30 13:48:00.922309 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:48:00.922319 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:48:00.922329 kernel: x2apic enabled Jan 30 13:48:00.922339 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:48:00.922349 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 30 13:48:00.922360 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 30 13:48:00.922370 kernel: kvm-guest: setup PV IPIs Jan 30 13:48:00.922380 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:48:00.922390 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 13:48:00.922404 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 30 13:48:00.922415 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 13:48:00.922425 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 13:48:00.922435 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 13:48:00.922446 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:48:00.922456 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:48:00.922466 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:48:00.922477 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:48:00.922487 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 13:48:00.922501 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 13:48:00.922511 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:48:00.922522 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:48:00.922532 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 30 13:48:00.922544 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 13:48:00.922554 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 13:48:00.922565 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:48:00.922575 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:48:00.922590 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:48:00.922602 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:48:00.922614 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 13:48:00.922626 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:48:00.922636 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:48:00.922658 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:48:00.922683 kernel: landlock: Up and running. Jan 30 13:48:00.922693 kernel: SELinux: Initializing. Jan 30 13:48:00.922704 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:48:00.922719 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:48:00.922730 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 13:48:00.922741 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:48:00.922752 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:48:00.922762 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:48:00.922773 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 13:48:00.922783 kernel: ... version: 0 Jan 30 13:48:00.922793 kernel: ... bit width: 48 Jan 30 13:48:00.922804 kernel: ... generic registers: 6 Jan 30 13:48:00.922818 kernel: ... value mask: 0000ffffffffffff Jan 30 13:48:00.922828 kernel: ... max period: 00007fffffffffff Jan 30 13:48:00.922838 kernel: ... fixed-purpose events: 0 Jan 30 13:48:00.922849 kernel: ... 
event mask: 000000000000003f Jan 30 13:48:00.922859 kernel: signal: max sigframe size: 1776 Jan 30 13:48:00.922869 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:48:00.922880 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:48:00.922891 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:48:00.922901 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:48:00.922914 kernel: .... node #0, CPUs: #1 #2 #3 Jan 30 13:48:00.922925 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:48:00.922935 kernel: smpboot: Max logical packages: 1 Jan 30 13:48:00.922946 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 30 13:48:00.922956 kernel: devtmpfs: initialized Jan 30 13:48:00.922966 kernel: x86/mm: Memory block size: 128MB Jan 30 13:48:00.922977 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 30 13:48:00.922987 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 30 13:48:00.922998 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 30 13:48:00.923012 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 30 13:48:00.923022 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 30 13:48:00.923033 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:48:00.923044 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:48:00.923054 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:48:00.923064 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:48:00.923075 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:48:00.923085 kernel: audit: type=2000 audit(1738244880.476:1): state=initialized audit_enabled=0 res=1 Jan 30 13:48:00.923095 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:48:00.923109 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:48:00.923119 kernel: cpuidle: using governor menu Jan 30 13:48:00.923130 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:48:00.923140 kernel: dca service started, version 1.12.1 Jan 30 13:48:00.923151 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 13:48:00.923161 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 30 13:48:00.923172 kernel: PCI: Using configuration type 1 for base access Jan 30 13:48:00.923182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:48:00.923193 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:48:00.923206 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:48:00.923217 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:48:00.923227 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:48:00.923238 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:48:00.923248 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:48:00.923258 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:48:00.923269 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:48:00.923279 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:48:00.923290 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:48:00.923303 kernel: ACPI: Interpreter enabled Jan 30 13:48:00.923314 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:48:00.923324 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:48:00.923335 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:48:00.923345 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:48:00.923355 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 13:48:00.923366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:48:00.923625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:48:00.923827 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 13:48:00.923985 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 13:48:00.924000 kernel: PCI host bridge to bus 0000:00 Jan 30 13:48:00.924157 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:48:00.924291 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:48:00.924434 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:48:00.924576 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 30 13:48:00.924749 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:48:00.924893 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 30 13:48:00.925037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:48:00.925215 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 13:48:00.925391 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 30 13:48:00.925552 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 30 13:48:00.925753 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 30 13:48:00.925919 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 30 13:48:00.926079 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 30 13:48:00.926239 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:48:00.926412 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:48:00.926572 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 30 13:48:00.926772 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 30 13:48:00.926941 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 30 13:48:00.927110 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:48:00.927269 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 30 
13:48:00.927428 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 30 13:48:00.927575 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 30 13:48:00.927807 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:48:00.927969 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 30 13:48:00.928136 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 30 13:48:00.928296 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 30 13:48:00.928454 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 30 13:48:00.928625 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 13:48:00.928813 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 13:48:00.928974 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 13:48:00.929128 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 30 13:48:00.929280 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 30 13:48:00.929439 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 13:48:00.929588 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 30 13:48:00.929603 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:48:00.929614 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:48:00.929625 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:48:00.929635 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:48:00.929660 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 13:48:00.929692 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 13:48:00.929703 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 13:48:00.929713 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 13:48:00.929724 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 13:48:00.929734 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 13:48:00.929745 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 13:48:00.929755 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 13:48:00.929766 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 13:48:00.929782 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 13:48:00.929793 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 13:48:00.929803 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 13:48:00.929813 kernel: iommu: Default domain type: Translated Jan 30 13:48:00.929824 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:48:00.929834 kernel: efivars: Registered efivars operations Jan 30 13:48:00.929845 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:48:00.929855 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:48:00.929865 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 30 13:48:00.929880 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 30 13:48:00.929890 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 30 13:48:00.929900 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 30 13:48:00.930058 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 13:48:00.930210 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 13:48:00.930371 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 
13:48:00.930388 kernel: vgaarb: loaded Jan 30 13:48:00.930399 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:48:00.930410 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:48:00.930426 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:48:00.930437 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:48:00.930448 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:48:00.930458 kernel: pnp: PnP ACPI init Jan 30 13:48:00.930649 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 13:48:00.930744 kernel: pnp: PnP ACPI: found 6 devices Jan 30 13:48:00.930756 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:48:00.930766 kernel: NET: Registered PF_INET protocol family Jan 30 13:48:00.930782 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:48:00.930793 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:48:00.930803 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:48:00.930814 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:48:00.930825 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:48:00.930835 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:48:00.930846 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:48:00.930857 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:48:00.930867 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:48:00.930882 kernel: NET: Registered PF_XDP protocol family Jan 30 13:48:00.931042 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 30 13:48:00.931192 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 30 13:48:00.931334 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:48:00.931471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:48:00.931608 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:48:00.931788 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 30 13:48:00.931944 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 13:48:00.932090 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 30 13:48:00.932105 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:48:00.932116 kernel: Initialise system trusted keyrings Jan 30 13:48:00.932126 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:48:00.932137 kernel: Key type asymmetric registered Jan 30 13:48:00.932147 kernel: Asymmetric key parser 'x509' registered Jan 30 13:48:00.932157 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:48:00.932167 kernel: io scheduler mq-deadline registered Jan 30 13:48:00.932180 kernel: io scheduler kyber registered Jan 30 13:48:00.932187 kernel: io scheduler bfq registered Jan 30 13:48:00.932195 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:48:00.932203 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 13:48:00.932211 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 13:48:00.932220 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 13:48:00.932230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 30 13:48:00.932240 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:48:00.932251 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:48:00.932265 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:48:00.932275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:48:00.932444 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 13:48:00.932461 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:48:00.932601 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 13:48:00.932848 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:48:00 UTC (1738244880) Jan 30 13:48:00.933004 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 13:48:00.933020 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 13:48:00.933036 kernel: efifb: probing for efifb Jan 30 13:48:00.933046 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 30 13:48:00.933056 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 30 13:48:00.933072 kernel: efifb: scrolling: redraw Jan 30 13:48:00.933082 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 30 13:48:00.933093 kernel: Console: switching to colour frame buffer device 100x37 Jan 30 13:48:00.933127 kernel: fb0: EFI VGA frame buffer device Jan 30 13:48:00.933140 kernel: pstore: Using crash dump compression: deflate Jan 30 13:48:00.933151 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:48:00.933163 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:48:00.933173 kernel: Segment Routing with IPv6 Jan 30 13:48:00.933184 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:48:00.933195 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:48:00.933206 kernel: Key type dns_resolver registered Jan 30 13:48:00.933216 kernel: IPI shorthand broadcast: enabled Jan 30 13:48:00.933228 kernel: sched_clock: Marking stable (592002941, 113881297)->(754339356, -48455118) Jan 30 13:48:00.933239 kernel: registered taskstats version 1 Jan 30 13:48:00.933250 kernel: Loading compiled-in X.509 certificates Jan 30 13:48:00.933264 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:48:00.933274 kernel: Key type .fscrypt registered Jan 30 13:48:00.933285 kernel: Key type fscrypt-provisioning registered Jan 30 13:48:00.933295 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 13:48:00.933306 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:48:00.933317 kernel: ima: No architecture policies found Jan 30 13:48:00.933328 kernel: clk: Disabling unused clocks Jan 30 13:48:00.933338 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:48:00.933352 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:48:00.933366 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:48:00.933377 kernel: Run /init as init process Jan 30 13:48:00.933388 kernel: with arguments: Jan 30 13:48:00.933398 kernel: /init Jan 30 13:48:00.933406 kernel: with environment: Jan 30 13:48:00.933414 kernel: HOME=/ Jan 30 13:48:00.933422 kernel: TERM=linux Jan 30 13:48:00.933431 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:48:00.933442 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:48:00.933455 systemd[1]: Detected virtualization kvm. Jan 30 13:48:00.933464 systemd[1]: Detected architecture x86-64. Jan 30 13:48:00.933472 systemd[1]: Running in initrd. Jan 30 13:48:00.933482 systemd[1]: No hostname configured, using default hostname. Jan 30 13:48:00.933493 systemd[1]: Hostname set to . Jan 30 13:48:00.933501 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:48:00.933509 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:48:00.933517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:48:00.933526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:48:00.933535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:48:00.933544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:48:00.933553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:48:00.933564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:48:00.933574 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:48:00.933583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:48:00.933591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:48:00.933599 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:48:00.933608 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:48:00.933616 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:48:00.933627 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:48:00.933635 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:48:00.933653 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:48:00.933674 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:48:00.933683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:48:00.933691 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
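The device units above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, are the systemd-escaped forms of paths like /dev/disk/by-label/EFI-SYSTEM: the leading slash is dropped, remaining slashes become "-", and characters that are not safe in unit names (including "-" itself) become \xNN escapes. The systemd-escape tool with --path performs this conversion authoritatively; the Python sketch below is only an approximation of the rule, sufficient to reproduce the unit names seen in this log.

    def escape_path(path):
        """Approximate systemd path escaping (see systemd-escape --path)."""
        out = []
        for byte in path.strip("/").encode():
            ch = chr(byte)
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)
            else:
                out.append("\\x%02x" % byte)  # e.g. '-' -> \x2d
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit above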
Jan 30 13:48:00.933700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:48:00.933708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:48:00.933721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:48:00.933729 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:48:00.933738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:48:00.933746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:48:00.933754 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:48:00.933763 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:48:00.933771 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:48:00.933783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:48:00.933794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:00.933809 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:48:00.933820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:48:00.933832 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:48:00.933844 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:48:00.933888 systemd-journald[192]: Collecting audit messages is disabled. Jan 30 13:48:00.933916 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:48:00.933928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:48:00.933940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:00.933955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:48:00.933967 systemd-journald[192]: Journal started Jan 30 13:48:00.933992 systemd-journald[192]: Runtime Journal (/run/log/journal/59bf2c08464949e6bb46afaf44e68e6b) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:48:00.919687 systemd-modules-load[193]: Inserted module 'overlay' Jan 30 13:48:00.940768 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:48:00.940749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:48:00.947249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:48:00.953602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:48:00.956759 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:48:00.957539 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 30 13:48:00.960596 kernel: Bridge firewalling registered Jan 30 13:48:00.958919 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:48:00.962266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:48:00.965276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:48:00.970124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 13:48:00.973266 dracut-cmdline[222]: dracut-dracut-053 Jan 30 13:48:00.976831 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:00.995232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:48:01.008881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:48:01.038950 systemd-resolved[255]: Positive Trust Anchors: Jan 30 13:48:01.038967 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:48:01.038998 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:48:01.041538 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 30 13:48:01.042694 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:48:01.048634 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:48:01.082719 kernel: SCSI subsystem initialized Jan 30 13:48:01.091710 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:48:01.102715 kernel: iscsi: registered transport (tcp) Jan 30 13:48:01.122843 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:48:01.122917 kernel: QLogic iSCSI HBA Driver Jan 30 13:48:01.167895 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:48:01.174852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:48:01.197846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:48:01.197885 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:48:01.198869 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:48:01.238702 kernel: raid6: avx2x4 gen() 30459 MB/s Jan 30 13:48:01.255692 kernel: raid6: avx2x2 gen() 31346 MB/s Jan 30 13:48:01.272772 kernel: raid6: avx2x1 gen() 25927 MB/s Jan 30 13:48:01.272793 kernel: raid6: using algorithm avx2x2 gen() 31346 MB/s Jan 30 13:48:01.290774 kernel: raid6: .... xor() 19929 MB/s, rmw enabled Jan 30 13:48:01.290793 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:48:01.310687 kernel: xor: automatically using best checksumming function avx Jan 30 13:48:01.458694 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:48:01.469411 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:48:01.476825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:48:01.490450 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 30 13:48:01.496013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
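The dracut-cmdline record above repeats the full kernel command line, including the verity.usr/verity.usrhash pair that later drives verity-setup.service and the root=LABEL=ROOT selection. As an illustration only, the sketch below splits that command line (copied from the log) into bare flags and key=value options; treating repeated keys such as rootflags and mount.usrflags as "last occurrence wins" is an assumption made for simplicity here, not a statement about how the kernel or dracut resolve duplicates.

    CMDLINE = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
               "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=ttyS0,115200 flatcar.first_boot=detected "
               "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681")

    def parse_cmdline(cmdline):
        """Split a kernel command line into bare flags and key=value options."""
        flags, options = [], {}
        for token in cmdline.split():
            if "=" in token:
                key, value = token.split("=", 1)
                options[key] = value  # assumption: keep the last occurrence
            else:
                flags.append(token)
        return flags, options

    flags, options = parse_cmdline(CMDLINE)
    print(options["root"], options["verity.usrhash"][:12], flags)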
Jan 30 13:48:01.502844 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:48:01.514567 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jan 30 13:48:01.543158 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:48:01.556849 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:48:01.619874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:48:01.630861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:48:01.646846 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:48:01.651123 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:48:01.652696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:48:01.655497 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:48:01.665704 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:48:01.700369 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:48:01.701839 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:48:01.701857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:48:01.701868 kernel: GPT:9289727 != 19775487 Jan 30 13:48:01.701878 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:48:01.701888 kernel: GPT:9289727 != 19775487 Jan 30 13:48:01.701898 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:48:01.701909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:48:01.701926 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:48:01.701936 kernel: libata version 3.00 loaded. Jan 30 13:48:01.701947 kernel: AES CTR mode by8 optimization enabled Jan 30 13:48:01.668485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:48:01.691723 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:48:01.695255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:48:01.695479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:48:01.697197 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
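The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 19775487) are what the kernel prints when a disk image has been enlarged after the partition table was written: the backup GPT header still sits at the old end of the image instead of the new last sector, and disk-uuid.service repairs this moments later in the log. The following Python sketch, under standard GPT layout assumptions and the 512-byte logical blocks reported for vda, reads the primary header of such an image and compares its alternate-LBA field with the real end of the device; the image path is hypothetical.

    import os, struct

    SECTOR = 512  # the log reports 512-byte logical blocks for vda

    def check_gpt(image_path):
        """Compare the primary GPT header's alternate-LBA field with the disk size."""
        with open(image_path, "rb") as f:
            f.seek(SECTOR)            # the primary GPT header lives in LBA 1
            header = f.read(92)
            f.seek(0, os.SEEK_END)
            last_lba = f.tell() // SECTOR - 1
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT signature in LBA 1")
        alternate_lba = struct.unpack_from("<Q", header, 32)[0]  # backup header LBA
        if alternate_lba != last_lba:
            print(f"backup header at LBA {alternate_lba}, disk ends at LBA {last_lba}: "
                  "image was grown; backup GPT should be moved to the end")
        else:
            print("backup GPT header is at the end of the disk")

    # check_gpt("flatcar.img")  # hypothetical image path

Run against this VM's image before first boot, the mismatch would show 9289727 versus 19775487, exactly the pair of numbers in the warning above.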
Jan 30 13:48:01.713848 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:48:01.743272 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:48:01.743297 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:48:01.743455 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:48:01.743594 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466) Jan 30 13:48:01.743607 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Jan 30 13:48:01.743617 kernel: scsi host0: ahci Jan 30 13:48:01.743823 kernel: scsi host1: ahci Jan 30 13:48:01.743971 kernel: scsi host2: ahci Jan 30 13:48:01.744121 kernel: scsi host3: ahci Jan 30 13:48:01.744304 kernel: scsi host4: ahci Jan 30 13:48:01.744451 kernel: scsi host5: ahci Jan 30 13:48:01.744591 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 30 13:48:01.744603 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 30 13:48:01.744613 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 30 13:48:01.744632 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 30 13:48:01.744646 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 30 13:48:01.744657 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 30 13:48:01.698545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:48:01.698817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:01.700740 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:01.718062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:01.735427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:01.766248 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:48:01.770240 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:48:01.779962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:48:01.789144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:48:01.798340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:48:01.812944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:48:01.816905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:48:01.821160 disk-uuid[565]: Primary Header is updated. Jan 30 13:48:01.821160 disk-uuid[565]: Secondary Entries is updated. Jan 30 13:48:01.821160 disk-uuid[565]: Secondary Header is updated. Jan 30 13:48:01.825161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:48:01.830757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:48:01.839011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:48:02.049090 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:48:02.049151 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:48:02.049162 kernel: ata3.00: applying bridge limits Jan 30 13:48:02.049843 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:48:02.050685 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:48:02.050709 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:48:02.051698 kernel: ata3.00: configured for UDMA/100 Jan 30 13:48:02.052691 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:48:02.057689 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:48:02.057711 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:48:02.092700 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:48:02.106401 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:48:02.106415 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:48:02.831156 disk-uuid[567]: The operation has completed successfully. Jan 30 13:48:02.832707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:48:02.851576 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:48:02.851735 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:48:02.881870 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:48:02.886973 sh[591]: Success Jan 30 13:48:02.898690 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:48:02.928889 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:48:02.946072 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:48:02.950862 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:48:02.959946 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:48:02.959973 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:48:02.959983 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:48:02.960964 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:48:02.962683 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:48:02.966368 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:48:02.966984 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:48:02.979823 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:48:02.982019 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:48:02.989810 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:48:02.989854 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:48:02.989865 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:48:02.992702 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:48:03.001154 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:48:03.002897 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:48:03.012439 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 30 13:48:03.019818 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:48:03.070638 ignition[684]: Ignition 2.19.0 Jan 30 13:48:03.070997 ignition[684]: Stage: fetch-offline Jan 30 13:48:03.071033 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:48:03.071042 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:48:03.071137 ignition[684]: parsed url from cmdline: "" Jan 30 13:48:03.071140 ignition[684]: no config URL provided Jan 30 13:48:03.071146 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:48:03.071154 ignition[684]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:48:03.071179 ignition[684]: op(1): [started] loading QEMU firmware config module Jan 30 13:48:03.071184 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:48:03.079160 ignition[684]: op(1): [finished] loading QEMU firmware config module Jan 30 13:48:03.102731 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:48:03.110789 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:48:03.119431 ignition[684]: parsing config with SHA512: 2cd1ad4bd6caa869aba564e3519e99530b0981911466e4d6714890c5b3e0295a14e9c43c0813cf08d0434fc008d7f8a3ef249f10af40707e53f5d1500bc19da3 Jan 30 13:48:03.124067 unknown[684]: fetched base config from "system" Jan 30 13:48:03.124767 ignition[684]: fetch-offline: fetch-offline passed Jan 30 13:48:03.124099 unknown[684]: fetched user config from "qemu" Jan 30 13:48:03.124878 ignition[684]: Ignition finished successfully Jan 30 13:48:03.127237 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:48:03.133346 systemd-networkd[780]: lo: Link UP Jan 30 13:48:03.133358 systemd-networkd[780]: lo: Gained carrier Jan 30 13:48:03.134905 systemd-networkd[780]: Enumeration completed Jan 30 13:48:03.134994 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:48:03.135327 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:48:03.135331 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:48:03.136391 systemd-networkd[780]: eth0: Link UP Jan 30 13:48:03.136395 systemd-networkd[780]: eth0: Gained carrier Jan 30 13:48:03.136401 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:48:03.136633 systemd[1]: Reached target network.target - Network. Jan 30 13:48:03.138203 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:48:03.149729 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:48:03.149821 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:48:03.162100 ignition[783]: Ignition 2.19.0 Jan 30 13:48:03.162111 ignition[783]: Stage: kargs Jan 30 13:48:03.162277 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:48:03.162289 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:48:03.163170 ignition[783]: kargs: kargs passed Jan 30 13:48:03.166849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
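The "parsing config with SHA512: 2cd1ad4b..." entry above is Ignition logging the digest of the config it received through QEMU's fw_cfg interface. A minimal sketch of how one might reproduce that digest for a local config file, to confirm the VM was handed the intended config, is shown below; the file name is hypothetical.

    import hashlib

    def config_digest(path):
        """Hex SHA512 of an Ignition config, for comparison with the journal entry."""
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    # Hypothetical path; compare the output with the digest logged above.
    # print(config_digest("config.ign"))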
Jan 30 13:48:03.163213 ignition[783]: Ignition finished successfully Jan 30 13:48:03.179864 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:48:03.195005 ignition[792]: Ignition 2.19.0 Jan 30 13:48:03.195016 ignition[792]: Stage: disks Jan 30 13:48:03.195217 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:48:03.195231 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:48:03.196193 ignition[792]: disks: disks passed Jan 30 13:48:03.196241 ignition[792]: Ignition finished successfully Jan 30 13:48:03.199754 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:48:03.200244 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:48:03.203022 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:48:03.203237 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:48:03.203572 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:48:03.204089 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:48:03.218827 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:48:03.231317 systemd-resolved[255]: Detected conflict on linux IN A 10.0.0.138 Jan 30 13:48:03.231333 systemd-resolved[255]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jan 30 13:48:03.232941 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:48:03.239457 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:48:03.250806 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:48:03.349694 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:48:03.350711 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:48:03.353143 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:48:03.372754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:48:03.375278 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:48:03.377832 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:48:03.377879 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:48:03.386901 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) Jan 30 13:48:03.386925 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:48:03.386940 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:48:03.386953 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:48:03.377901 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:48:03.388941 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:48:03.390288 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:48:03.392226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:48:03.395732 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 13:48:03.430249 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:48:03.434358 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:48:03.438229 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:48:03.441656 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:48:03.532933 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:48:03.551770 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:48:03.554306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:48:03.560723 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:48:03.577420 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:48:03.582970 ignition[924]: INFO : Ignition 2.19.0 Jan 30 13:48:03.582970 ignition[924]: INFO : Stage: mount Jan 30 13:48:03.584585 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:48:03.584585 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:48:03.584585 ignition[924]: INFO : mount: mount passed Jan 30 13:48:03.584585 ignition[924]: INFO : Ignition finished successfully Jan 30 13:48:03.590631 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:48:03.599771 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:48:03.959850 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:48:03.973934 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:48:03.982461 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jan 30 13:48:03.982489 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:48:03.982500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:48:03.983950 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:48:03.986684 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:48:03.988683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:48:04.011083 ignition[954]: INFO : Ignition 2.19.0 Jan 30 13:48:04.011083 ignition[954]: INFO : Stage: files Jan 30 13:48:04.012780 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:48:04.012780 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:48:04.012780 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:48:04.016313 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:48:04.016313 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:48:04.016313 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:48:04.020235 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:48:04.021830 unknown[954]: wrote ssh authorized keys file for user: core Jan 30 13:48:04.023082 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:48:04.024712 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:48:04.024712 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:48:04.061091 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:48:04.131342 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:48:04.133587 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:48:04.463779 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:48:04.887907 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:48:04.887907 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:48:04.891760 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:48:04.893941 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:48:04.893941 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:48:04.897064 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 13:48:04.897064 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:48:04.900287 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:48:04.900287 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 13:48:04.900287 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:48:04.925361 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:48:04.929689 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:48:04.931372 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:48:04.931372 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:48:04.934212 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:48:04.935741 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:48:04.937569 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:48:04.939397 ignition[954]: INFO : files: files passed Jan 30 13:48:04.940239 ignition[954]: INFO : Ignition finished successfully Jan 30 13:48:04.943921 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:48:04.955946 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:48:04.958990 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:48:04.961690 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 30 13:48:04.961810 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:48:04.967722 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:48:04.971471 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:48:04.971471 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:48:04.974709 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:48:04.977493 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:48:04.977779 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:48:04.990797 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:48:05.014316 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:48:05.014434 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:48:05.016824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:48:05.018909 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:48:05.020869 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:48:05.021642 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:48:05.039754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:48:05.045799 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:48:05.057343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:48:05.057927 systemd-networkd[780]: eth0: Gained IPv6LL Jan 30 13:48:05.058927 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:48:05.061302 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:48:05.062388 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:48:05.062497 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:48:05.064893 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:48:05.066637 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:48:05.068420 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:48:05.070483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:48:05.072791 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:48:05.074901 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:48:05.076863 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:48:05.079082 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:48:05.081247 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:48:05.083443 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:48:05.085249 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:48:05.085360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:48:05.087952 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:48:05.089622 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:48:05.091758 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:48:05.091899 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:48:05.093767 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:48:05.093874 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:48:05.096144 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:48:05.096252 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:48:05.098023 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:48:05.099870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:48:05.103750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:48:05.105726 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:48:05.107648 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:48:05.109621 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:48:05.109747 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:48:05.111714 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:48:05.111805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:48:05.113579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:48:05.113709 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:48:05.115899 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:48:05.116057 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:48:05.138950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:48:05.140924 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:48:05.141116 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:48:05.144497 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:48:05.146418 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:48:05.146588 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:48:05.148857 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:48:05.149028 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:48:05.153474 ignition[1008]: INFO : Ignition 2.19.0 Jan 30 13:48:05.153474 ignition[1008]: INFO : Stage: umount Jan 30 13:48:05.166469 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:48:05.166469 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:48:05.166469 ignition[1008]: INFO : umount: umount passed Jan 30 13:48:05.166469 ignition[1008]: INFO : Ignition finished successfully Jan 30 13:48:05.164040 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:48:05.164162 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:48:05.167789 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:48:05.167913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:48:05.170865 systemd[1]: Stopped target network.target - Network. 
Jan 30 13:48:05.172489 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:48:05.172561 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:48:05.174618 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:48:05.174709 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:48:05.176555 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:48:05.176600 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:48:05.178512 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:48:05.178570 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:48:05.181740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:48:05.183705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:48:05.184701 systemd-networkd[780]: eth0: DHCPv6 lease lost Jan 30 13:48:05.187178 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:48:05.187306 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:48:05.187762 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:48:05.187803 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:48:05.196828 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:48:05.197782 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:48:05.197856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:48:05.200222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:48:05.203063 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:48:05.203204 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:48:05.215757 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:48:05.215842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:48:05.219055 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:48:05.219114 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:48:05.221280 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:48:05.221341 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:48:05.223957 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:48:05.224476 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:48:05.224659 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:48:05.226254 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:48:05.226365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:48:05.229524 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:48:05.229603 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:48:05.231453 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:48:05.231494 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:48:05.232961 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:48:05.233010 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 13:48:05.233661 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:48:05.233718 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:48:05.234155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:48:05.234197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:48:05.239829 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:48:05.241251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:48:05.241306 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:48:05.243561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:48:05.243607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:05.246609 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:48:05.246742 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:48:05.372406 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:48:05.372566 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:48:05.374746 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:48:05.376489 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:48:05.376554 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:48:05.385847 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:48:05.393649 systemd[1]: Switching root. Jan 30 13:48:05.423274 systemd-journald[192]: Journal stopped Jan 30 13:48:06.409618 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 30 13:48:06.409697 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:48:06.409711 kernel: SELinux: policy capability open_perms=1 Jan 30 13:48:06.409722 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:48:06.409737 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:48:06.409748 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:48:06.409759 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:48:06.409769 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:48:06.409780 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:48:06.409796 kernel: audit: type=1403 audit(1738244885.711:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:48:06.409809 systemd[1]: Successfully loaded SELinux policy in 40.486ms. Jan 30 13:48:06.409832 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.564ms. Jan 30 13:48:06.409845 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:48:06.409859 systemd[1]: Detected virtualization kvm. Jan 30 13:48:06.409871 systemd[1]: Detected architecture x86-64. Jan 30 13:48:06.409882 systemd[1]: Detected first boot. Jan 30 13:48:06.409894 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:48:06.409906 zram_generator::config[1052]: No configuration found. Jan 30 13:48:06.409918 systemd[1]: Populated /etc with preset unit settings. 
Jan 30 13:48:06.409930 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:48:06.409941 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:48:06.409955 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:48:06.409968 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:48:06.409979 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:48:06.409995 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:48:06.410007 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:48:06.410018 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:48:06.410030 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:48:06.410042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:48:06.410056 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:48:06.410067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:48:06.410080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:48:06.410092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:48:06.410103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:48:06.410115 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:48:06.410127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:48:06.410139 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:48:06.410150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:48:06.410165 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:48:06.410176 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:48:06.410189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:48:06.410200 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:48:06.410212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:48:06.410224 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:48:06.410236 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:48:06.410247 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:48:06.410262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:48:06.410273 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:48:06.410285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:48:06.410297 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:48:06.410309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:48:06.410320 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:48:06.410333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 30 13:48:06.410345 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:48:06.410357 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:48:06.410371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:48:06.410383 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:48:06.410395 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:48:06.410406 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:48:06.410418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:48:06.410430 systemd[1]: Reached target machines.target - Containers. Jan 30 13:48:06.410442 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:48:06.410454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:48:06.410468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:48:06.410480 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:48:06.410492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:48:06.410503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:48:06.410522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:48:06.410533 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:48:06.410545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:48:06.410557 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:48:06.410573 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:48:06.410587 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:48:06.410599 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:48:06.410611 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:48:06.410623 kernel: fuse: init (API version 7.39) Jan 30 13:48:06.410634 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:48:06.410645 kernel: loop: module loaded Jan 30 13:48:06.410657 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:48:06.410681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:48:06.410693 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:48:06.410707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:48:06.410735 systemd-journald[1129]: Collecting audit messages is disabled. Jan 30 13:48:06.410759 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:48:06.410772 systemd[1]: Stopped verity-setup.service. Jan 30 13:48:06.410788 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 13:48:06.410800 systemd-journald[1129]: Journal started Jan 30 13:48:06.410823 systemd-journald[1129]: Runtime Journal (/run/log/journal/59bf2c08464949e6bb46afaf44e68e6b) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:48:06.196733 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:48:06.217154 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:48:06.217604 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:48:06.415859 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:48:06.415894 kernel: ACPI: bus type drm_connector registered Jan 30 13:48:06.417232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:48:06.418380 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:48:06.419577 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:48:06.420831 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:48:06.422009 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:48:06.423198 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:48:06.424406 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:48:06.425852 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:48:06.427365 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:48:06.427540 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:48:06.428986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:48:06.429152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:48:06.430557 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:48:06.430741 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:48:06.432068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:48:06.432228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:48:06.433735 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:48:06.433896 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:48:06.435249 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:48:06.435413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:48:06.436767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:48:06.438128 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:48:06.439606 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:48:06.451897 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:48:06.457776 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:48:06.460063 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:48:06.461205 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:48:06.461235 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:48:06.463187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 30 13:48:06.465899 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:48:06.469790 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:48:06.471141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:48:06.472862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:48:06.476589 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:48:06.477817 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:48:06.479899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:48:06.481120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:48:06.484808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:48:06.487080 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:48:06.494334 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:48:06.497863 systemd-journald[1129]: Time spent on flushing to /var/log/journal/59bf2c08464949e6bb46afaf44e68e6b is 17.878ms for 992 entries. Jan 30 13:48:06.497863 systemd-journald[1129]: System Journal (/var/log/journal/59bf2c08464949e6bb46afaf44e68e6b) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:48:06.530918 systemd-journald[1129]: Received client request to flush runtime journal. Jan 30 13:48:06.530953 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:48:06.499122 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:48:06.500730 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:48:06.503362 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:48:06.505954 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:48:06.507751 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:48:06.513540 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:48:06.527742 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:48:06.530305 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:48:06.534430 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:48:06.536284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:48:06.552859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:48:06.559359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:48:06.570065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:48:06.572606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:48:06.573980 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:48:06.577152 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 30 13:48:06.580699 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:48:06.592639 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 30 13:48:06.592677 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 30 13:48:06.599791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:48:06.616689 kernel: loop2: detected capacity change from 0 to 205544 Jan 30 13:48:06.652704 kernel: loop3: detected capacity change from 0 to 140768 Jan 30 13:48:06.664713 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:48:06.675710 kernel: loop5: detected capacity change from 0 to 205544 Jan 30 13:48:06.682191 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:48:06.682782 (sd-merge)[1193]: Merged extensions into '/usr'. Jan 30 13:48:06.689577 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:48:06.689595 systemd[1]: Reloading... Jan 30 13:48:06.750694 zram_generator::config[1219]: No configuration found. Jan 30 13:48:06.808845 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:48:06.886802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:06.935283 systemd[1]: Reloading finished in 245 ms. Jan 30 13:48:06.969361 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:48:06.970924 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:48:06.982913 systemd[1]: Starting ensure-sysext.service... Jan 30 13:48:06.985529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:48:06.990941 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:48:06.990952 systemd[1]: Reloading... Jan 30 13:48:07.009692 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:48:07.010053 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:48:07.011039 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:48:07.011331 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 30 13:48:07.011411 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 30 13:48:07.014606 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:48:07.014618 systemd-tmpfiles[1257]: Skipping /boot Jan 30 13:48:07.027449 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:48:07.027463 systemd-tmpfiles[1257]: Skipping /boot Jan 30 13:48:07.048707 zram_generator::config[1286]: No configuration found. Jan 30 13:48:07.149102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:07.198261 systemd[1]: Reloading finished in 206 ms. Jan 30 13:48:07.215817 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
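The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr, which is why the kubernetes image linked into /etc/extensions during the files stage becomes visible after the reload. A minimal sketch of the layout such an image needs before being packed into a .raw file (names and contents here are illustrative, not taken from the actual images):

    import os

    name = "kubernetes"                # must match the extension image's file name
    root = f"/tmp/{name}-sysext"       # staging tree used only for this sketch

    os.makedirs(f"{root}/usr/lib/extension-release.d", exist_ok=True)
    os.makedirs(f"{root}/usr/bin", exist_ok=True)

    # The release file tells systemd-sysext which OS the image may be merged into.
    with open(f"{root}/usr/lib/extension-release.d/extension-release.{name}", "w") as f:
        f.write("ID=flatcar\nSYSEXT_LEVEL=1.0\n")

    # Payload binaries (kubelet, kubectl, ...) go under usr/ before the tree is
    # packed into a squashfs image and dropped into /etc/extensions on the host.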
Jan 30 13:48:07.228062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:48:07.236846 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:48:07.239370 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:48:07.241841 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:48:07.246739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:48:07.250560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:48:07.253135 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:48:07.256404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:48:07.256581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:48:07.261729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:48:07.264913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:48:07.268990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:48:07.270892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:48:07.271046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:48:07.272003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:48:07.273067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:48:07.275013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:48:07.275477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:48:07.278805 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:48:07.279113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:48:07.291344 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:48:07.294016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:48:07.294954 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:48:07.297288 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 30 13:48:07.303999 augenrules[1351]: No rules Jan 30 13:48:07.304052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:48:07.306609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:48:07.309793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:48:07.311047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:48:07.314103 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:48:07.315251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 13:48:07.317288 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:48:07.319533 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:48:07.321443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:48:07.321868 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:48:07.323410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:48:07.325250 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:48:07.327104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:48:07.327387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:48:07.329135 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:48:07.329301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:48:07.349708 systemd[1]: Finished ensure-sysext.service. Jan 30 13:48:07.359607 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:48:07.359804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:48:07.359947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:48:07.365082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:48:07.367693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1365) Jan 30 13:48:07.373329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:48:07.375886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:48:07.380796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:48:07.381961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:48:07.385581 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:48:07.395812 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:48:07.400805 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:48:07.403837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:48:07.403862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:48:07.404416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:48:07.404606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:48:07.406172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:48:07.406342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:48:07.414189 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:48:07.414401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:48:07.417501 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 30 13:48:07.419608 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:48:07.419858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:48:07.421903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:48:07.442952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:48:07.449716 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:48:07.459638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:48:07.470688 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:48:07.471834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:48:07.473509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:48:07.473695 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:48:07.487189 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:48:07.529897 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:48:07.530072 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:48:07.530253 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:48:07.517765 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:48:07.531571 systemd-networkd[1392]: lo: Link UP Jan 30 13:48:07.531917 systemd-networkd[1392]: lo: Gained carrier Jan 30 13:48:07.533654 systemd-networkd[1392]: Enumeration completed Jan 30 13:48:07.533885 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:48:07.534405 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:48:07.534572 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:48:07.537908 systemd-networkd[1392]: eth0: Link UP Jan 30 13:48:07.537917 systemd-networkd[1392]: eth0: Gained carrier Jan 30 13:48:07.537929 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:48:07.544879 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:48:07.552990 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:48:07.555164 systemd-resolved[1326]: Positive Trust Anchors: Jan 30 13:48:07.555183 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:48:07.555214 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:48:07.559558 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:48:07.561876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:07.563131 systemd-resolved[1326]: Defaulting to hostname 'linux'. Jan 30 13:48:07.566603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:48:07.566978 systemd[1]: Reached target network.target - Network. Jan 30 13:48:07.568779 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:48:07.573019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:48:07.573305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:07.576420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:07.579129 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:48:07.581690 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:48:07.580449 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Jan 30 13:48:08.169091 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:48:08.169260 systemd-timesyncd[1394]: Initial clock synchronization to Thu 2025-01-30 13:48:08.168871 UTC. Jan 30 13:48:08.169304 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 30 13:48:08.191539 kernel: kvm_amd: TSC scaling supported Jan 30 13:48:08.191587 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:48:08.191600 kernel: kvm_amd: Nested Paging enabled Jan 30 13:48:08.191612 kernel: kvm_amd: LBR virtualization supported Jan 30 13:48:08.192641 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:48:08.192711 kernel: kvm_amd: Virtual GIF supported Jan 30 13:48:08.213252 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:48:08.241982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:08.253400 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:48:08.264432 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:48:08.272654 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:48:08.304409 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:48:08.306998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:48:08.308205 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:48:08.309474 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
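The apparent gap in the timeline here is the initial clock step: entries up to systemd-timesyncd's "Network configuration changed" message are stamped with the unsynchronized clock, while the following entries carry the time obtained from 10.0.0.1:123, which is also why systemd-resolved reports a clock change and flushes its caches. Taking the two timestamps from the log above, the forward step is on the order of half a second:

    from datetime import datetime

    # Last entry stamped before synchronization and the reported sync time (values from the log above).
    before = datetime.fromisoformat("2025-01-30 13:48:07.580449")
    after  = datetime.fromisoformat("2025-01-30 13:48:08.168871")

    # Approximate size of the forward clock step applied by systemd-timesyncd.
    print(f"{(after - before).total_seconds():.3f} s")   # roughly 0.59 s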
Jan 30 13:48:08.310781 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:48:08.312276 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:48:08.313597 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:48:08.314888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:48:08.316157 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:48:08.316182 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:48:08.317087 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:48:08.318594 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:48:08.321295 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:48:08.333540 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:48:08.335898 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:48:08.337543 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:48:08.338696 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:48:08.339668 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:48:08.340630 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:48:08.340658 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:48:08.341611 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:48:08.343678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:48:08.347411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:48:08.350784 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:48:08.351903 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:48:08.353234 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:48:08.353637 jq[1440]: false Jan 30 13:48:08.356303 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:48:08.358631 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:48:08.360298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:48:08.362516 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:48:08.367994 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:48:08.369557 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:48:08.369972 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:48:08.372013 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:48:08.375285 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
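prepare-helm.service, written and enabled by the Ignition files stage, is described above as "Unpack helm to /opt/bin". Its actual unit contents are not visible in this log; a hypothetical sketch of the equivalent step, using the archive Ignition placed at /opt/helm-v3.13.2-linux-amd64.tar.gz, would be:

    import os
    import tarfile

    # Hypothetical equivalent of a unit described as "Unpack helm to /opt/bin";
    # the real prepare-helm.service is not reproduced in this log.
    os.makedirs("/opt/bin", exist_ok=True)
    with tarfile.open("/opt/helm-v3.13.2-linux-amd64.tar.gz") as tar:
        member = tar.getmember("linux-amd64/helm")   # path inside the upstream tarball
        member.name = "helm"                         # drop the leading directory on extract
        tar.extract(member, path="/opt/bin")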
Jan 30 13:48:08.379733 extend-filesystems[1441]: Found loop3 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found loop4 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found loop5 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found sr0 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda1 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda2 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda3 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found usr Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda4 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda6 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda7 Jan 30 13:48:08.379733 extend-filesystems[1441]: Found vda9 Jan 30 13:48:08.379733 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 30 13:48:08.385889 dbus-daemon[1439]: [system] SELinux support is enabled Jan 30 13:48:08.385537 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:48:08.386383 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:48:08.386570 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:48:08.396187 update_engine[1450]: I20250130 13:48:08.396078 1450 main.cc:92] Flatcar Update Engine starting Jan 30 13:48:08.399406 update_engine[1450]: I20250130 13:48:08.397748 1450 update_check_scheduler.cc:74] Next update check in 11m19s Jan 30 13:48:08.399770 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:48:08.401525 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:48:08.402606 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:48:08.406768 jq[1454]: true Jan 30 13:48:08.407260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:48:08.407495 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:48:08.419601 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 30 13:48:08.420881 jq[1462]: true Jan 30 13:48:08.421426 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:48:08.427080 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:48:08.432030 tar[1461]: linux-amd64/helm Jan 30 13:48:08.432254 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:48:08.433466 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:48:08.435021 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:48:08.435041 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:48:08.437034 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:48:08.437054 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:48:08.439091 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 13:48:08.449613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1372) Jan 30 13:48:08.446329 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:48:08.447684 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:48:08.447704 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:48:08.451442 systemd-logind[1449]: New seat seat0. Jan 30 13:48:08.459058 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:48:08.473227 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:48:08.485511 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:48:08.485511 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:48:08.485511 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:48:08.489336 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 30 13:48:08.487667 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:48:08.487909 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:48:08.495059 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:48:08.499607 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:48:08.502676 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:48:08.527778 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:48:08.620462 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:48:08.634873 containerd[1463]: time="2025-01-30T13:48:08.634677567Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:48:08.651605 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:48:08.660440 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:48:08.660903 containerd[1463]: time="2025-01-30T13:48:08.660859569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:48:08.663118 containerd[1463]: time="2025-01-30T13:48:08.663084230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:48:08.663204 containerd[1463]: time="2025-01-30T13:48:08.663187203Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:48:08.663272 containerd[1463]: time="2025-01-30T13:48:08.663257525Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:48:08.663543 containerd[1463]: time="2025-01-30T13:48:08.663522021Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:48:08.663623 containerd[1463]: time="2025-01-30T13:48:08.663606950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664158 containerd[1463]: time="2025-01-30T13:48:08.663912964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664158 containerd[1463]: time="2025-01-30T13:48:08.663934384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664297 containerd[1463]: time="2025-01-30T13:48:08.664274211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664373 containerd[1463]: time="2025-01-30T13:48:08.664354291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664443 containerd[1463]: time="2025-01-30T13:48:08.664424573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664503 containerd[1463]: time="2025-01-30T13:48:08.664487030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:48:08.664677 containerd[1463]: time="2025-01-30T13:48:08.664659363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:48:08.665022 containerd[1463]: time="2025-01-30T13:48:08.665000673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:48:08.665244 containerd[1463]: time="2025-01-30T13:48:08.665221377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:48:08.665309 containerd[1463]: time="2025-01-30T13:48:08.665294394Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:48:08.665479 containerd[1463]: time="2025-01-30T13:48:08.665461708Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:48:08.665602 containerd[1463]: time="2025-01-30T13:48:08.665583666Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:48:08.665865 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566). Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673104598Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673177735Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673196621Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673214865Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673233580Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673377389Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673622379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673740090Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673758254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673775105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673791867Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673814459Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673828365Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675128 containerd[1463]: time="2025-01-30T13:48:08.673843493Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.673812 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673861407Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673877317Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673890692Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673904438Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673928022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673954912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673969991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.673985259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.674001249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.674017420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.674032999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.674048398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.674063175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675571 containerd[1463]: time="2025-01-30T13:48:08.674083253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.674104 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674096969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674111446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674152964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674176748Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674202887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674218967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674234386Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674286204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674307363Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674321410Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674336478Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674349402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674364440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:48:08.675995 containerd[1463]: time="2025-01-30T13:48:08.674383656Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:48:08.676333 containerd[1463]: time="2025-01-30T13:48:08.674396019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:48:08.676368 containerd[1463]: time="2025-01-30T13:48:08.674701833Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:48:08.676368 containerd[1463]: time="2025-01-30T13:48:08.674766684Z" level=info msg="Connect containerd service" Jan 30 13:48:08.676368 containerd[1463]: time="2025-01-30T13:48:08.674816267Z" level=info msg="using legacy CRI server" Jan 30 13:48:08.676368 containerd[1463]: time="2025-01-30T13:48:08.674825394Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:48:08.676368 containerd[1463]: 
time="2025-01-30T13:48:08.674934359Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:48:08.679660 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:48:08.683156 containerd[1463]: time="2025-01-30T13:48:08.683092846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683324190Z" level=info msg="Start subscribing containerd event" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683412044Z" level=info msg="Start recovering state" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683488798Z" level=info msg="Start event monitor" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683502143Z" level=info msg="Start snapshots syncer" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683513435Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683522572Z" level=info msg="Start streaming server" Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683593715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683656282Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:48:08.686158 containerd[1463]: time="2025-01-30T13:48:08.683718329Z" level=info msg="containerd successfully booted in 0.050187s" Jan 30 13:48:08.683805 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:48:08.708273 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:48:08.719464 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:48:08.721543 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:08.722583 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:48:08.724351 sshd[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:08.724613 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:48:08.735439 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:48:08.744573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:48:08.748281 systemd-logind[1449]: New session 1 of user core. Jan 30 13:48:08.757030 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:48:08.770451 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:48:08.774321 (systemd)[1532]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:48:08.864510 tar[1461]: linux-amd64/LICENSE Jan 30 13:48:08.864628 tar[1461]: linux-amd64/README.md Jan 30 13:48:08.881620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:48:08.895006 systemd[1532]: Queued start job for default target default.target. Jan 30 13:48:08.903401 systemd[1532]: Created slice app.slice - User Application Slice. Jan 30 13:48:08.903426 systemd[1532]: Reached target paths.target - Paths. 
Jan 30 13:48:08.903440 systemd[1532]: Reached target timers.target - Timers. Jan 30 13:48:08.904950 systemd[1532]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:48:08.916536 systemd[1532]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:48:08.916660 systemd[1532]: Reached target sockets.target - Sockets. Jan 30 13:48:08.916679 systemd[1532]: Reached target basic.target - Basic System. Jan 30 13:48:08.916720 systemd[1532]: Reached target default.target - Main User Target. Jan 30 13:48:08.916754 systemd[1532]: Startup finished in 133ms. Jan 30 13:48:08.917167 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:48:08.919822 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:48:08.991400 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:59580.service - OpenSSH per-connection server daemon (10.0.0.1:59580). Jan 30 13:48:09.026288 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 59580 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:09.027947 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:09.032014 systemd-logind[1449]: New session 2 of user core. Jan 30 13:48:09.051383 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:48:09.105973 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:09.114995 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:59580.service: Deactivated successfully. Jan 30 13:48:09.116649 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:48:09.118128 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:48:09.119360 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:59586.service - OpenSSH per-connection server daemon (10.0.0.1:59586). Jan 30 13:48:09.121375 systemd-logind[1449]: Removed session 2. Jan 30 13:48:09.155459 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 59586 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:09.157044 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:09.160974 systemd-logind[1449]: New session 3 of user core. Jan 30 13:48:09.174238 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:48:09.229963 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:09.233545 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:59586.service: Deactivated successfully. Jan 30 13:48:09.235378 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:48:09.235996 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:48:09.236931 systemd-logind[1449]: Removed session 3. Jan 30 13:48:09.291295 systemd-networkd[1392]: eth0: Gained IPv6LL Jan 30 13:48:09.294417 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:48:09.296209 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:48:09.307332 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:48:09.309794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:09.312018 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:48:09.330880 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:48:09.331165 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 30 13:48:09.333152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:48:09.337607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:48:09.923975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:09.925698 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:48:09.926954 systemd[1]: Startup finished in 750ms (kernel) + 4.979s (initrd) + 3.668s (userspace) = 9.397s. Jan 30 13:48:09.929650 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:10.330283 kubelet[1581]: E0130 13:48:10.330110 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:10.334596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:10.334872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:19.240675 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:53804.service - OpenSSH per-connection server daemon (10.0.0.1:53804). Jan 30 13:48:19.279966 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 53804 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:19.281597 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:19.285420 systemd-logind[1449]: New session 4 of user core. Jan 30 13:48:19.301301 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:48:19.354597 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:19.367626 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:53804.service: Deactivated successfully. Jan 30 13:48:19.369395 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:48:19.370854 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:48:19.377379 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:53812.service - OpenSSH per-connection server daemon (10.0.0.1:53812). Jan 30 13:48:19.378318 systemd-logind[1449]: Removed session 4. Jan 30 13:48:19.410481 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 53812 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:19.411910 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:19.416033 systemd-logind[1449]: New session 5 of user core. Jan 30 13:48:19.424258 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:48:19.473608 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:19.486024 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:53812.service: Deactivated successfully. Jan 30 13:48:19.487950 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:48:19.489420 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:48:19.490804 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:53826.service - OpenSSH per-connection server daemon (10.0.0.1:53826). Jan 30 13:48:19.491834 systemd-logind[1449]: Removed session 5. 
Jan 30 13:48:19.527150 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 53826 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:19.528907 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:19.532918 systemd-logind[1449]: New session 6 of user core. Jan 30 13:48:19.543420 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:48:19.601576 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:19.609116 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:53826.service: Deactivated successfully. Jan 30 13:48:19.610978 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:48:19.612495 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:48:19.613842 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Jan 30 13:48:19.614700 systemd-logind[1449]: Removed session 6. Jan 30 13:48:19.652630 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:19.654430 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:19.658724 systemd-logind[1449]: New session 7 of user core. Jan 30 13:48:19.668255 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:48:19.725745 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:48:19.726090 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:19.743107 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:19.744861 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:19.755735 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:53828.service: Deactivated successfully. Jan 30 13:48:19.757231 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:48:19.758578 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:48:19.768370 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:53842.service - OpenSSH per-connection server daemon (10.0.0.1:53842). Jan 30 13:48:19.769185 systemd-logind[1449]: Removed session 7. Jan 30 13:48:19.799570 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 53842 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:19.801037 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:19.804711 systemd-logind[1449]: New session 8 of user core. Jan 30 13:48:19.821376 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:48:19.877719 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:48:19.878205 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:19.882920 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:19.890359 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:48:19.890769 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:19.915416 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:48:19.917097 auditctl[1630]: No rules Jan 30 13:48:19.918335 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 30 13:48:19.918603 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:48:19.920500 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:48:19.951971 augenrules[1648]: No rules Jan 30 13:48:19.953648 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:48:19.954986 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:19.957012 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:19.972682 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:53842.service: Deactivated successfully. Jan 30 13:48:19.974754 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:48:19.976639 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:48:19.986531 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:53858.service - OpenSSH per-connection server daemon (10.0.0.1:53858). Jan 30 13:48:19.987781 systemd-logind[1449]: Removed session 8. Jan 30 13:48:20.019028 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 53858 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:48:20.020805 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:20.025165 systemd-logind[1449]: New session 9 of user core. Jan 30 13:48:20.038348 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:48:20.094414 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:48:20.094871 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:20.376957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:48:20.394393 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:48:20.394592 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:48:20.395741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:20.568310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:20.573285 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:20.613310 kubelet[1691]: E0130 13:48:20.613248 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:20.619615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:20.619868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:20.681194 dockerd[1677]: time="2025-01-30T13:48:20.681000560Z" level=info msg="Starting up" Jan 30 13:48:21.018740 dockerd[1677]: time="2025-01-30T13:48:21.018583478Z" level=info msg="Loading containers: start." Jan 30 13:48:21.182179 kernel: Initializing XFRM netlink socket Jan 30 13:48:21.265302 systemd-networkd[1392]: docker0: Link UP Jan 30 13:48:21.306386 dockerd[1677]: time="2025-01-30T13:48:21.306325534Z" level=info msg="Loading containers: done." 
Jan 30 13:48:21.321667 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3241204698-merged.mount: Deactivated successfully. Jan 30 13:48:21.324095 dockerd[1677]: time="2025-01-30T13:48:21.324041672Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:48:21.324199 dockerd[1677]: time="2025-01-30T13:48:21.324171225Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:48:21.324312 dockerd[1677]: time="2025-01-30T13:48:21.324285158Z" level=info msg="Daemon has completed initialization" Jan 30 13:48:21.368357 dockerd[1677]: time="2025-01-30T13:48:21.368275901Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:48:21.368625 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:48:22.101011 containerd[1463]: time="2025-01-30T13:48:22.100968309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:48:22.659084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089460036.mount: Deactivated successfully. Jan 30 13:48:23.484465 containerd[1463]: time="2025-01-30T13:48:23.484402608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:23.485209 containerd[1463]: time="2025-01-30T13:48:23.485153215Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:48:23.486374 containerd[1463]: time="2025-01-30T13:48:23.486344960Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:23.489546 containerd[1463]: time="2025-01-30T13:48:23.489485659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:23.490592 containerd[1463]: time="2025-01-30T13:48:23.490522082Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.389499401s" Jan 30 13:48:23.490640 containerd[1463]: time="2025-01-30T13:48:23.490611500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:48:23.491992 containerd[1463]: time="2025-01-30T13:48:23.491953726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:48:24.616564 containerd[1463]: time="2025-01-30T13:48:24.616506993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:24.617417 containerd[1463]: time="2025-01-30T13:48:24.617343341Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:48:24.618701 containerd[1463]: 
time="2025-01-30T13:48:24.618667705Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:24.621549 containerd[1463]: time="2025-01-30T13:48:24.621506938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:24.622737 containerd[1463]: time="2025-01-30T13:48:24.622695567Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.130707386s" Jan 30 13:48:24.622784 containerd[1463]: time="2025-01-30T13:48:24.622734881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:48:24.623245 containerd[1463]: time="2025-01-30T13:48:24.623193250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:48:25.812678 containerd[1463]: time="2025-01-30T13:48:25.812609465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:25.813389 containerd[1463]: time="2025-01-30T13:48:25.813357057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:48:25.814644 containerd[1463]: time="2025-01-30T13:48:25.814611439Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:25.817412 containerd[1463]: time="2025-01-30T13:48:25.817350705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:25.818410 containerd[1463]: time="2025-01-30T13:48:25.818373773Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.195147101s" Jan 30 13:48:25.818410 containerd[1463]: time="2025-01-30T13:48:25.818413157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:48:25.818998 containerd[1463]: time="2025-01-30T13:48:25.818849996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:48:26.942415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890383793.mount: Deactivated successfully. 
Jan 30 13:48:28.573544 containerd[1463]: time="2025-01-30T13:48:28.573483503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:28.588410 containerd[1463]: time="2025-01-30T13:48:28.588339478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:48:28.613824 containerd[1463]: time="2025-01-30T13:48:28.613745324Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:28.624639 containerd[1463]: time="2025-01-30T13:48:28.624561895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:28.625099 containerd[1463]: time="2025-01-30T13:48:28.625043819Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.806166531s" Jan 30 13:48:28.625099 containerd[1463]: time="2025-01-30T13:48:28.625090186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:48:28.625660 containerd[1463]: time="2025-01-30T13:48:28.625622694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:48:29.264830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005501183.mount: Deactivated successfully. 
Jan 30 13:48:30.177500 containerd[1463]: time="2025-01-30T13:48:30.177432076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:30.192279 containerd[1463]: time="2025-01-30T13:48:30.192209744Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:48:30.203401 containerd[1463]: time="2025-01-30T13:48:30.203363758Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:30.219280 containerd[1463]: time="2025-01-30T13:48:30.219243813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:30.220772 containerd[1463]: time="2025-01-30T13:48:30.220704713Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.595039889s" Jan 30 13:48:30.220820 containerd[1463]: time="2025-01-30T13:48:30.220779362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:48:30.221302 containerd[1463]: time="2025-01-30T13:48:30.221250947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:48:30.801551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:48:30.811368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:30.962795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:30.967656 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:31.020507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591178228.mount: Deactivated successfully. 
Jan 30 13:48:31.028647 containerd[1463]: time="2025-01-30T13:48:31.028595147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:31.029343 containerd[1463]: time="2025-01-30T13:48:31.029312812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:48:31.030415 containerd[1463]: time="2025-01-30T13:48:31.030369524Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:31.032752 containerd[1463]: time="2025-01-30T13:48:31.032698801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:31.034088 containerd[1463]: time="2025-01-30T13:48:31.033482781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 812.202159ms" Jan 30 13:48:31.034088 containerd[1463]: time="2025-01-30T13:48:31.033518498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:48:31.034225 containerd[1463]: time="2025-01-30T13:48:31.034183064Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:48:31.041907 kubelet[1963]: E0130 13:48:31.041859 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:31.046035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:31.046269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:31.491603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301394678.mount: Deactivated successfully. 
Jan 30 13:48:34.417759 containerd[1463]: time="2025-01-30T13:48:34.417704155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:34.418627 containerd[1463]: time="2025-01-30T13:48:34.418589686Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:48:34.421865 containerd[1463]: time="2025-01-30T13:48:34.420192321Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:34.424217 containerd[1463]: time="2025-01-30T13:48:34.424187271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:34.425235 containerd[1463]: time="2025-01-30T13:48:34.425207895Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.390998542s" Jan 30 13:48:34.425235 containerd[1463]: time="2025-01-30T13:48:34.425231148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:48:36.938420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:36.948362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:36.971994 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit session-9.scope)... Jan 30 13:48:36.972016 systemd[1]: Reloading... Jan 30 13:48:37.052239 zram_generator::config[2099]: No configuration found. Jan 30 13:48:37.229750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:37.306579 systemd[1]: Reloading finished in 334 ms. Jan 30 13:48:37.355346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:37.358209 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:48:37.358478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:37.360189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:37.516765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:37.521073 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:48:37.580805 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:37.580805 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:48:37.580805 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:37.581209 kubelet[2149]: I0130 13:48:37.580868 2149 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:48:38.198164 kubelet[2149]: I0130 13:48:38.198109 2149 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:48:38.198164 kubelet[2149]: I0130 13:48:38.198162 2149 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:48:38.198441 kubelet[2149]: I0130 13:48:38.198415 2149 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:48:38.221246 kubelet[2149]: I0130 13:48:38.221184 2149 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:38.222973 kubelet[2149]: E0130 13:48:38.222934 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:38.230047 kubelet[2149]: E0130 13:48:38.230008 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:48:38.230047 kubelet[2149]: I0130 13:48:38.230048 2149 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:48:38.236001 kubelet[2149]: I0130 13:48:38.235962 2149 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:48:38.237287 kubelet[2149]: I0130 13:48:38.237259 2149 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:48:38.237475 kubelet[2149]: I0130 13:48:38.237429 2149 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:48:38.237661 kubelet[2149]: I0130 13:48:38.237467 2149 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:48:38.237783 kubelet[2149]: I0130 13:48:38.237662 2149 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:48:38.237783 kubelet[2149]: I0130 13:48:38.237673 2149 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:48:38.237845 kubelet[2149]: I0130 13:48:38.237817 2149 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:38.239735 kubelet[2149]: I0130 13:48:38.239709 2149 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:48:38.239735 kubelet[2149]: I0130 13:48:38.239732 2149 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:48:38.239830 kubelet[2149]: I0130 13:48:38.239776 2149 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:48:38.239830 kubelet[2149]: I0130 13:48:38.239797 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:48:38.243004 kubelet[2149]: W0130 13:48:38.242907 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:38.243004 kubelet[2149]: E0130 13:48:38.242964 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:38.243170 kubelet[2149]: W0130 13:48:38.243112 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:38.243217 kubelet[2149]: E0130 13:48:38.243181 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:38.245564 kubelet[2149]: I0130 13:48:38.245546 2149 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:48:38.247156 kubelet[2149]: I0130 13:48:38.247117 2149 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:48:38.247885 kubelet[2149]: W0130 13:48:38.247859 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:48:38.248792 kubelet[2149]: I0130 13:48:38.248682 2149 server.go:1269] "Started kubelet" Jan 30 13:48:38.249930 kubelet[2149]: I0130 13:48:38.249014 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:48:38.249930 kubelet[2149]: I0130 13:48:38.249361 2149 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:48:38.249930 kubelet[2149]: I0130 13:48:38.249410 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:48:38.249930 kubelet[2149]: I0130 13:48:38.249903 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:48:38.250078 kubelet[2149]: I0130 13:48:38.250008 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:48:38.250968 kubelet[2149]: I0130 13:48:38.250225 2149 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:48:38.250968 kubelet[2149]: I0130 13:48:38.250866 2149 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:48:38.250968 kubelet[2149]: I0130 13:48:38.250920 2149 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:48:38.251073 kubelet[2149]: I0130 13:48:38.250983 2149 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:48:38.252147 kubelet[2149]: W0130 13:48:38.252039 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:38.254217 kubelet[2149]: E0130 13:48:38.252081 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" 
logger="UnhandledError" Jan 30 13:48:38.256304 kubelet[2149]: I0130 13:48:38.256282 2149 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:48:38.257189 kubelet[2149]: I0130 13:48:38.256453 2149 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:48:38.257189 kubelet[2149]: E0130 13:48:38.256470 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:38.257189 kubelet[2149]: I0130 13:48:38.256550 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:48:38.257189 kubelet[2149]: E0130 13:48:38.256555 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Jan 30 13:48:38.257189 kubelet[2149]: E0130 13:48:38.257047 2149 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:48:38.262049 kubelet[2149]: E0130 13:48:38.259536 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c8b24d6ab81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:48:38.248647553 +0000 UTC m=+0.719341686,LastTimestamp:2025-01-30 13:48:38.248647553 +0000 UTC m=+0.719341686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:48:38.272638 kubelet[2149]: I0130 13:48:38.272613 2149 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:48:38.272638 kubelet[2149]: I0130 13:48:38.272629 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:48:38.272638 kubelet[2149]: I0130 13:48:38.272645 2149 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:38.276379 kubelet[2149]: I0130 13:48:38.276339 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:48:38.277943 kubelet[2149]: I0130 13:48:38.277887 2149 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:48:38.277943 kubelet[2149]: I0130 13:48:38.277931 2149 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:48:38.277943 kubelet[2149]: I0130 13:48:38.277952 2149 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:48:38.278073 kubelet[2149]: E0130 13:48:38.277995 2149 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:48:38.278562 kubelet[2149]: W0130 13:48:38.278526 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:38.278619 kubelet[2149]: E0130 13:48:38.278564 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:38.279202 kubelet[2149]: I0130 13:48:38.279123 2149 policy_none.go:49] "None policy: Start" Jan 30 13:48:38.279714 kubelet[2149]: I0130 13:48:38.279696 2149 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:48:38.279753 kubelet[2149]: I0130 13:48:38.279722 2149 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:48:38.287545 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:48:38.301297 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:48:38.304854 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:48:38.326540 kubelet[2149]: I0130 13:48:38.326348 2149 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:48:38.326668 kubelet[2149]: I0130 13:48:38.326607 2149 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:48:38.326668 kubelet[2149]: I0130 13:48:38.326620 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:48:38.326914 kubelet[2149]: I0130 13:48:38.326882 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:48:38.328174 kubelet[2149]: E0130 13:48:38.328118 2149 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:48:38.386089 systemd[1]: Created slice kubepods-burstable-pod9796a4f9b37fec3f883d0e11caeb02a0.slice - libcontainer container kubepods-burstable-pod9796a4f9b37fec3f883d0e11caeb02a0.slice. Jan 30 13:48:38.408994 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 30 13:48:38.427094 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
Jan 30 13:48:38.427896 kubelet[2149]: I0130 13:48:38.427857 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:48:38.428254 kubelet[2149]: E0130 13:48:38.428209 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 30 13:48:38.452520 kubelet[2149]: I0130 13:48:38.452414 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:38.452520 kubelet[2149]: I0130 13:48:38.452450 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:38.452520 kubelet[2149]: I0130 13:48:38.452480 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:38.452520 kubelet[2149]: I0130 13:48:38.452504 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:48:38.452693 kubelet[2149]: I0130 13:48:38.452526 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9796a4f9b37fec3f883d0e11caeb02a0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9796a4f9b37fec3f883d0e11caeb02a0\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:38.452693 kubelet[2149]: I0130 13:48:38.452546 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9796a4f9b37fec3f883d0e11caeb02a0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9796a4f9b37fec3f883d0e11caeb02a0\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:38.452693 kubelet[2149]: I0130 13:48:38.452564 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:38.452693 kubelet[2149]: I0130 13:48:38.452583 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:38.452693 kubelet[2149]: I0130 13:48:38.452603 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9796a4f9b37fec3f883d0e11caeb02a0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9796a4f9b37fec3f883d0e11caeb02a0\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:38.457760 kubelet[2149]: E0130 13:48:38.457714 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Jan 30 13:48:38.630252 kubelet[2149]: I0130 13:48:38.630203 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:48:38.630693 kubelet[2149]: E0130 13:48:38.630515 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 30 13:48:38.706612 kubelet[2149]: E0130 13:48:38.706476 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:38.707396 containerd[1463]: time="2025-01-30T13:48:38.707354939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9796a4f9b37fec3f883d0e11caeb02a0,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:38.711525 kubelet[2149]: E0130 13:48:38.711494 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:38.711983 containerd[1463]: time="2025-01-30T13:48:38.711938253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:38.735480 kubelet[2149]: E0130 13:48:38.735446 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:38.735954 containerd[1463]: time="2025-01-30T13:48:38.735917314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:38.858642 kubelet[2149]: E0130 13:48:38.858589 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Jan 30 13:48:39.032241 kubelet[2149]: I0130 13:48:39.032099 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:48:39.032883 kubelet[2149]: E0130 13:48:39.032826 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 30 13:48:39.057454 kubelet[2149]: W0130 13:48:39.057379 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:39.057454 kubelet[2149]: E0130 13:48:39.057460 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:39.196874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882156308.mount: Deactivated successfully. Jan 30 13:48:39.205738 kubelet[2149]: W0130 13:48:39.205700 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:39.205807 kubelet[2149]: E0130 13:48:39.205752 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:39.206400 containerd[1463]: time="2025-01-30T13:48:39.206363671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:39.209827 containerd[1463]: time="2025-01-30T13:48:39.209796427Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:48:39.210825 containerd[1463]: time="2025-01-30T13:48:39.210788307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:39.211805 containerd[1463]: time="2025-01-30T13:48:39.211762794Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:39.213068 containerd[1463]: time="2025-01-30T13:48:39.213039939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:39.213418 containerd[1463]: time="2025-01-30T13:48:39.213365479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:48:39.214385 containerd[1463]: time="2025-01-30T13:48:39.214331320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:48:39.215816 containerd[1463]: time="2025-01-30T13:48:39.215780828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:39.217614 containerd[1463]: time="2025-01-30T13:48:39.217578679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.591194ms" Jan 30 13:48:39.218269 containerd[1463]: time="2025-01-30T13:48:39.218233937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.792696ms" Jan 30 13:48:39.220826 containerd[1463]: time="2025-01-30T13:48:39.220800149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.777908ms" Jan 30 13:48:39.379870 kubelet[2149]: W0130 13:48:39.379780 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:39.379997 kubelet[2149]: E0130 13:48:39.379876 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:39.530164 containerd[1463]: time="2025-01-30T13:48:39.530003686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:39.530164 containerd[1463]: time="2025-01-30T13:48:39.530064560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:39.530164 containerd[1463]: time="2025-01-30T13:48:39.530077624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.530374 containerd[1463]: time="2025-01-30T13:48:39.530319057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552516617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552553326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552563666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552650689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552438391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552502090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552517118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.552934 containerd[1463]: time="2025-01-30T13:48:39.552589965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.632395 systemd[1]: Started cri-containerd-228881d2029525b61233c28687c4a5e49421c81bbc0fc867be4b9ec72d7e76f0.scope - libcontainer container 228881d2029525b61233c28687c4a5e49421c81bbc0fc867be4b9ec72d7e76f0. Jan 30 13:48:39.637009 systemd[1]: Started cri-containerd-305edb91ce339971a71286646352640ab5f5c0741db893f9758f8cd9cdb67159.scope - libcontainer container 305edb91ce339971a71286646352640ab5f5c0741db893f9758f8cd9cdb67159. Jan 30 13:48:39.638550 systemd[1]: Started cri-containerd-b7afcdc534a41cff37cc55d809000a8a78736dae97d6f0d78cee07233b6d5229.scope - libcontainer container b7afcdc534a41cff37cc55d809000a8a78736dae97d6f0d78cee07233b6d5229. Jan 30 13:48:39.659957 kubelet[2149]: E0130 13:48:39.659888 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s" Jan 30 13:48:39.668419 kubelet[2149]: W0130 13:48:39.668251 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 30 13:48:39.668419 kubelet[2149]: E0130 13:48:39.668349 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:48:39.719427 containerd[1463]: time="2025-01-30T13:48:39.718877203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7afcdc534a41cff37cc55d809000a8a78736dae97d6f0d78cee07233b6d5229\"" Jan 30 13:48:39.721350 kubelet[2149]: E0130 13:48:39.721322 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:39.722956 containerd[1463]: time="2025-01-30T13:48:39.722915736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9796a4f9b37fec3f883d0e11caeb02a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"228881d2029525b61233c28687c4a5e49421c81bbc0fc867be4b9ec72d7e76f0\"" Jan 30 13:48:39.723777 containerd[1463]: 
time="2025-01-30T13:48:39.723705076Z" level=info msg="CreateContainer within sandbox \"b7afcdc534a41cff37cc55d809000a8a78736dae97d6f0d78cee07233b6d5229\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:48:39.724538 kubelet[2149]: E0130 13:48:39.724503 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:39.726693 containerd[1463]: time="2025-01-30T13:48:39.726663873Z" level=info msg="CreateContainer within sandbox \"228881d2029525b61233c28687c4a5e49421c81bbc0fc867be4b9ec72d7e76f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:48:39.729466 containerd[1463]: time="2025-01-30T13:48:39.729417436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"305edb91ce339971a71286646352640ab5f5c0741db893f9758f8cd9cdb67159\"" Jan 30 13:48:39.729957 kubelet[2149]: E0130 13:48:39.729930 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:39.731847 containerd[1463]: time="2025-01-30T13:48:39.731803671Z" level=info msg="CreateContainer within sandbox \"305edb91ce339971a71286646352640ab5f5c0741db893f9758f8cd9cdb67159\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:48:39.752106 containerd[1463]: time="2025-01-30T13:48:39.752053289Z" level=info msg="CreateContainer within sandbox \"228881d2029525b61233c28687c4a5e49421c81bbc0fc867be4b9ec72d7e76f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17c411b80bb2f96057ea39301cd2b4633cc1cac440d50f817962fbdc498e5682\"" Jan 30 13:48:39.753155 containerd[1463]: time="2025-01-30T13:48:39.753101213Z" level=info msg="StartContainer for \"17c411b80bb2f96057ea39301cd2b4633cc1cac440d50f817962fbdc498e5682\"" Jan 30 13:48:39.759596 containerd[1463]: time="2025-01-30T13:48:39.759522183Z" level=info msg="CreateContainer within sandbox \"b7afcdc534a41cff37cc55d809000a8a78736dae97d6f0d78cee07233b6d5229\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe0128da99cf515c33ddcb2b74c685f9e3183331b2f1cca7890f0d172d023c25\"" Jan 30 13:48:39.760267 containerd[1463]: time="2025-01-30T13:48:39.760182531Z" level=info msg="StartContainer for \"fe0128da99cf515c33ddcb2b74c685f9e3183331b2f1cca7890f0d172d023c25\"" Jan 30 13:48:39.765339 containerd[1463]: time="2025-01-30T13:48:39.765177597Z" level=info msg="CreateContainer within sandbox \"305edb91ce339971a71286646352640ab5f5c0741db893f9758f8cd9cdb67159\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3aa72318bc653d1452e385f481cddf9a7c3b4faf42cbc7bba2b75a0af35232f6\"" Jan 30 13:48:39.765788 containerd[1463]: time="2025-01-30T13:48:39.765747014Z" level=info msg="StartContainer for \"3aa72318bc653d1452e385f481cddf9a7c3b4faf42cbc7bba2b75a0af35232f6\"" Jan 30 13:48:39.786989 systemd[1]: Started cri-containerd-17c411b80bb2f96057ea39301cd2b4633cc1cac440d50f817962fbdc498e5682.scope - libcontainer container 17c411b80bb2f96057ea39301cd2b4633cc1cac440d50f817962fbdc498e5682. 
Jan 30 13:48:39.791819 systemd[1]: Started cri-containerd-fe0128da99cf515c33ddcb2b74c685f9e3183331b2f1cca7890f0d172d023c25.scope - libcontainer container fe0128da99cf515c33ddcb2b74c685f9e3183331b2f1cca7890f0d172d023c25. Jan 30 13:48:39.796054 systemd[1]: Started cri-containerd-3aa72318bc653d1452e385f481cddf9a7c3b4faf42cbc7bba2b75a0af35232f6.scope - libcontainer container 3aa72318bc653d1452e385f481cddf9a7c3b4faf42cbc7bba2b75a0af35232f6. Jan 30 13:48:39.835290 kubelet[2149]: I0130 13:48:39.835203 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:48:39.836334 kubelet[2149]: E0130 13:48:39.836308 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 30 13:48:39.846421 containerd[1463]: time="2025-01-30T13:48:39.846245489Z" level=info msg="StartContainer for \"17c411b80bb2f96057ea39301cd2b4633cc1cac440d50f817962fbdc498e5682\" returns successfully" Jan 30 13:48:39.846421 containerd[1463]: time="2025-01-30T13:48:39.846366456Z" level=info msg="StartContainer for \"fe0128da99cf515c33ddcb2b74c685f9e3183331b2f1cca7890f0d172d023c25\" returns successfully" Jan 30 13:48:39.859426 containerd[1463]: time="2025-01-30T13:48:39.859372051Z" level=info msg="StartContainer for \"3aa72318bc653d1452e385f481cddf9a7c3b4faf42cbc7bba2b75a0af35232f6\" returns successfully" Jan 30 13:48:40.287510 kubelet[2149]: E0130 13:48:40.287473 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:40.289463 kubelet[2149]: E0130 13:48:40.289438 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:40.290880 kubelet[2149]: E0130 13:48:40.290826 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:41.284991 kubelet[2149]: E0130 13:48:41.284953 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:48:41.292012 kubelet[2149]: E0130 13:48:41.291988 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:41.438078 kubelet[2149]: I0130 13:48:41.438051 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:48:41.529394 kubelet[2149]: I0130 13:48:41.529360 2149 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:48:41.529495 kubelet[2149]: E0130 13:48:41.529404 2149 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:48:41.602677 kubelet[2149]: E0130 13:48:41.602634 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:41.703072 kubelet[2149]: E0130 13:48:41.703013 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:41.803801 kubelet[2149]: E0130 13:48:41.803760 2149 kubelet_node_status.go:453] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 30 13:48:41.904506 kubelet[2149]: E0130 13:48:41.904345 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:42.004980 kubelet[2149]: E0130 13:48:42.004912 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:42.105485 kubelet[2149]: E0130 13:48:42.105421 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:42.206175 kubelet[2149]: E0130 13:48:42.206003 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:42.244671 kubelet[2149]: I0130 13:48:42.244616 2149 apiserver.go:52] "Watching apiserver" Jan 30 13:48:42.251702 kubelet[2149]: I0130 13:48:42.251656 2149 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:48:43.161825 kubelet[2149]: E0130 13:48:43.161790 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:43.294114 kubelet[2149]: E0130 13:48:43.294076 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:43.454663 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-9.scope)... Jan 30 13:48:43.454679 systemd[1]: Reloading... Jan 30 13:48:43.532331 zram_generator::config[2462]: No configuration found. Jan 30 13:48:43.638871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:43.729264 systemd[1]: Reloading finished in 274 ms. Jan 30 13:48:43.771666 kubelet[2149]: I0130 13:48:43.771598 2149 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:43.771870 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:43.784350 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:48:43.784621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:43.784667 systemd[1]: kubelet.service: Consumed 1.357s CPU time, 119.6M memory peak, 0B memory swap peak. Jan 30 13:48:43.792525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:43.934966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:43.940420 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:48:43.978268 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:43.978268 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:48:43.978268 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:43.978651 kubelet[2504]: I0130 13:48:43.978326 2504 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:48:43.986048 kubelet[2504]: I0130 13:48:43.985912 2504 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:48:43.986048 kubelet[2504]: I0130 13:48:43.985942 2504 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:48:43.986237 kubelet[2504]: I0130 13:48:43.986193 2504 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:48:43.987478 kubelet[2504]: I0130 13:48:43.987456 2504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:48:43.989421 kubelet[2504]: I0130 13:48:43.989359 2504 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:43.993244 kubelet[2504]: E0130 13:48:43.993188 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:48:43.993244 kubelet[2504]: I0130 13:48:43.993231 2504 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:48:43.999112 kubelet[2504]: I0130 13:48:43.999082 2504 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:48:43.999261 kubelet[2504]: I0130 13:48:43.999234 2504 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:48:43.999419 kubelet[2504]: I0130 13:48:43.999375 2504 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:48:43.999568 kubelet[2504]: I0130 13:48:43.999410 2504 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:48:43.999568 kubelet[2504]: I0130 13:48:43.999565 2504 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:48:43.999663 kubelet[2504]: I0130 13:48:43.999574 2504 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:48:43.999663 kubelet[2504]: I0130 13:48:43.999605 2504 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:43.999729 kubelet[2504]: I0130 13:48:43.999712 2504 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:48:43.999729 kubelet[2504]: I0130 13:48:43.999727 2504 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:48:43.999899 kubelet[2504]: I0130 13:48:43.999756 2504 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:48:43.999899 kubelet[2504]: I0130 13:48:43.999770 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:48:44.001377 kubelet[2504]: I0130 13:48:44.001121 2504 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:48:44.002155 kubelet[2504]: I0130 13:48:44.002112 2504 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:48:44.003002 kubelet[2504]: I0130 13:48:44.002991 2504 server.go:1269] "Started kubelet" Jan 30 13:48:44.003640 kubelet[2504]: I0130 13:48:44.003583 2504 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:48:44.004233 kubelet[2504]: I0130 
13:48:44.004201 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:48:44.004693 kubelet[2504]: I0130 13:48:44.004675 2504 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:48:44.005635 kubelet[2504]: I0130 13:48:44.005596 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:48:44.005961 kubelet[2504]: I0130 13:48:44.005949 2504 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:48:44.006405 kubelet[2504]: I0130 13:48:44.006376 2504 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:48:44.006550 kubelet[2504]: I0130 13:48:44.006528 2504 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:48:44.006937 kubelet[2504]: E0130 13:48:44.006909 2504 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:48:44.007373 kubelet[2504]: I0130 13:48:44.007300 2504 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:48:44.010163 kubelet[2504]: I0130 13:48:44.009644 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:48:44.010163 kubelet[2504]: I0130 13:48:44.009929 2504 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:48:44.010163 kubelet[2504]: I0130 13:48:44.009997 2504 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:48:44.013920 kubelet[2504]: E0130 13:48:44.011867 2504 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:48:44.013920 kubelet[2504]: I0130 13:48:44.011956 2504 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:48:44.021180 kubelet[2504]: I0130 13:48:44.020379 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:48:44.022344 kubelet[2504]: I0130 13:48:44.021763 2504 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:48:44.022344 kubelet[2504]: I0130 13:48:44.021794 2504 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:48:44.022344 kubelet[2504]: I0130 13:48:44.021818 2504 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:48:44.022344 kubelet[2504]: E0130 13:48:44.021861 2504 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:48:44.043612 kubelet[2504]: I0130 13:48:44.043585 2504 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:48:44.043612 kubelet[2504]: I0130 13:48:44.043599 2504 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:48:44.043612 kubelet[2504]: I0130 13:48:44.043616 2504 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:44.043769 kubelet[2504]: I0130 13:48:44.043743 2504 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:48:44.043769 kubelet[2504]: I0130 13:48:44.043752 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:48:44.043769 kubelet[2504]: I0130 13:48:44.043770 2504 policy_none.go:49] "None policy: Start" Jan 30 13:48:44.044353 kubelet[2504]: I0130 13:48:44.044334 2504 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:48:44.044390 kubelet[2504]: I0130 13:48:44.044355 2504 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:48:44.044482 kubelet[2504]: I0130 13:48:44.044472 2504 state_mem.go:75] "Updated machine memory state" Jan 30 13:48:44.048643 kubelet[2504]: I0130 13:48:44.048615 2504 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:48:44.048880 kubelet[2504]: I0130 13:48:44.048857 2504 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:48:44.048908 kubelet[2504]: I0130 13:48:44.048871 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:48:44.049062 kubelet[2504]: I0130 13:48:44.049047 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:48:44.127465 kubelet[2504]: E0130 13:48:44.127403 2504 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:48:44.154226 kubelet[2504]: I0130 13:48:44.154180 2504 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:48:44.160988 kubelet[2504]: I0130 13:48:44.160956 2504 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 30 13:48:44.161124 kubelet[2504]: I0130 13:48:44.161032 2504 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:48:44.208552 kubelet[2504]: I0130 13:48:44.208490 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:44.208552 kubelet[2504]: I0130 13:48:44.208553 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9796a4f9b37fec3f883d0e11caeb02a0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"9796a4f9b37fec3f883d0e11caeb02a0\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:44.208738 kubelet[2504]: I0130 13:48:44.208574 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9796a4f9b37fec3f883d0e11caeb02a0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9796a4f9b37fec3f883d0e11caeb02a0\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:44.208738 kubelet[2504]: I0130 13:48:44.208594 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:44.208738 kubelet[2504]: I0130 13:48:44.208614 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:44.208738 kubelet[2504]: I0130 13:48:44.208635 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:44.208738 kubelet[2504]: I0130 13:48:44.208653 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:44.208853 kubelet[2504]: I0130 13:48:44.208673 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:48:44.208853 kubelet[2504]: I0130 13:48:44.208696 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9796a4f9b37fec3f883d0e11caeb02a0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9796a4f9b37fec3f883d0e11caeb02a0\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:44.428583 kubelet[2504]: E0130 13:48:44.428539 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:44.428583 kubelet[2504]: E0130 13:48:44.428560 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:44.428765 kubelet[2504]: E0130 13:48:44.428653 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:45.000339 kubelet[2504]: I0130 13:48:45.000286 2504 apiserver.go:52] "Watching apiserver" Jan 30 13:48:45.006894 kubelet[2504]: I0130 13:48:45.006861 2504 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:48:45.033653 kubelet[2504]: E0130 13:48:45.033620 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:45.074900 kubelet[2504]: E0130 13:48:45.074832 2504 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:48:45.075052 kubelet[2504]: E0130 13:48:45.074907 2504 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:48:45.075052 kubelet[2504]: I0130 13:48:45.074977 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.074964986 podStartE2EDuration="1.074964986s" podCreationTimestamp="2025-01-30 13:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:45.073948997 +0000 UTC m=+1.129198906" watchObservedRunningTime="2025-01-30 13:48:45.074964986 +0000 UTC m=+1.130214895" Jan 30 13:48:45.075259 kubelet[2504]: E0130 13:48:45.075068 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:45.075259 kubelet[2504]: E0130 13:48:45.075079 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:45.090363 kubelet[2504]: I0130 13:48:45.089773 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.0897561040000001 podStartE2EDuration="1.089756104s" podCreationTimestamp="2025-01-30 13:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:45.082948511 +0000 UTC m=+1.138198420" watchObservedRunningTime="2025-01-30 13:48:45.089756104 +0000 UTC m=+1.145006013" Jan 30 13:48:45.098090 kubelet[2504]: I0130 13:48:45.098020 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.098000858 podStartE2EDuration="2.098000858s" podCreationTimestamp="2025-01-30 13:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:45.090091945 +0000 UTC m=+1.145341854" watchObservedRunningTime="2025-01-30 13:48:45.098000858 +0000 UTC m=+1.153250768" Jan 30 13:48:46.034888 kubelet[2504]: E0130 13:48:46.034601 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:46.037261 kubelet[2504]: E0130 13:48:46.036654 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:47.037522 kubelet[2504]: E0130 13:48:47.037495 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:48.506788 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:48.508581 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:48.512489 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:53858.service: Deactivated successfully. Jan 30 13:48:48.514562 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:48:48.514746 systemd[1]: session-9.scope: Consumed 4.262s CPU time, 157.0M memory peak, 0B memory swap peak. Jan 30 13:48:48.515380 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:48:48.516373 systemd-logind[1449]: Removed session 9. Jan 30 13:48:48.691456 kubelet[2504]: I0130 13:48:48.691414 2504 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:48:48.691873 containerd[1463]: time="2025-01-30T13:48:48.691815015Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:48:48.692106 kubelet[2504]: I0130 13:48:48.691985 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:48:49.586904 systemd[1]: Created slice kubepods-besteffort-pode7291e33_4279_41a3_83ac_a46ddcbfbb15.slice - libcontainer container kubepods-besteffort-pode7291e33_4279_41a3_83ac_a46ddcbfbb15.slice. Jan 30 13:48:49.644861 kubelet[2504]: I0130 13:48:49.644797 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7291e33-4279-41a3-83ac-a46ddcbfbb15-xtables-lock\") pod \"kube-proxy-blfmg\" (UID: \"e7291e33-4279-41a3-83ac-a46ddcbfbb15\") " pod="kube-system/kube-proxy-blfmg" Jan 30 13:48:49.644861 kubelet[2504]: I0130 13:48:49.644840 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7291e33-4279-41a3-83ac-a46ddcbfbb15-lib-modules\") pod \"kube-proxy-blfmg\" (UID: \"e7291e33-4279-41a3-83ac-a46ddcbfbb15\") " pod="kube-system/kube-proxy-blfmg" Jan 30 13:48:49.644861 kubelet[2504]: I0130 13:48:49.644861 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7291e33-4279-41a3-83ac-a46ddcbfbb15-kube-proxy\") pod \"kube-proxy-blfmg\" (UID: \"e7291e33-4279-41a3-83ac-a46ddcbfbb15\") " pod="kube-system/kube-proxy-blfmg" Jan 30 13:48:49.645085 kubelet[2504]: I0130 13:48:49.644882 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgdpf\" (UniqueName: \"kubernetes.io/projected/e7291e33-4279-41a3-83ac-a46ddcbfbb15-kube-api-access-kgdpf\") pod \"kube-proxy-blfmg\" (UID: \"e7291e33-4279-41a3-83ac-a46ddcbfbb15\") " pod="kube-system/kube-proxy-blfmg" Jan 30 13:48:49.848471 systemd[1]: Created slice kubepods-besteffort-pod9336c609_9a8a_4dba_96d7_d3da1f664b1c.slice - libcontainer container kubepods-besteffort-pod9336c609_9a8a_4dba_96d7_d3da1f664b1c.slice. 
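The recurring dns.go:153 "Nameserver limits exceeded" entries are kubelet trimming the node's DNS configuration before it is handed to pods: resolv.conf lists more nameservers than kubelet will apply, so the extras are dropped and the applied line collapses to "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Go sketch of that trimming follows; the limit of three nameservers and the /etc/resolv.conf path are illustrative assumptions rather than values taken from this log.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the kind of per-pod limit kubelet enforces; treat the
// exact value as an assumption for this sketch.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, "open resolv.conf:", err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		// Keep only the first few entries, as the log's "applied nameserver line" shows.
		fmt.Printf("nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("within limits, applied line: %s\n", strings.Join(servers, " "))
}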
Jan 30 13:48:49.898189 kubelet[2504]: E0130 13:48:49.898156 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:49.899271 containerd[1463]: time="2025-01-30T13:48:49.899216910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blfmg,Uid:e7291e33-4279-41a3-83ac-a46ddcbfbb15,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:49.924639 containerd[1463]: time="2025-01-30T13:48:49.924519726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:49.924639 containerd[1463]: time="2025-01-30T13:48:49.924607454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:49.924879 containerd[1463]: time="2025-01-30T13:48:49.924626019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:49.924879 containerd[1463]: time="2025-01-30T13:48:49.924710008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:49.945269 systemd[1]: Started cri-containerd-a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e.scope - libcontainer container a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e. Jan 30 13:48:49.946799 kubelet[2504]: I0130 13:48:49.946767 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9336c609-9a8a-4dba-96d7-d3da1f664b1c-var-lib-calico\") pod \"tigera-operator-76c4976dd7-9rnbq\" (UID: \"9336c609-9a8a-4dba-96d7-d3da1f664b1c\") " pod="tigera-operator/tigera-operator-76c4976dd7-9rnbq" Jan 30 13:48:49.946862 kubelet[2504]: I0130 13:48:49.946810 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfxh\" (UniqueName: \"kubernetes.io/projected/9336c609-9a8a-4dba-96d7-d3da1f664b1c-kube-api-access-2wfxh\") pod \"tigera-operator-76c4976dd7-9rnbq\" (UID: \"9336c609-9a8a-4dba-96d7-d3da1f664b1c\") " pod="tigera-operator/tigera-operator-76c4976dd7-9rnbq" Jan 30 13:48:49.964647 containerd[1463]: time="2025-01-30T13:48:49.964602779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blfmg,Uid:e7291e33-4279-41a3-83ac-a46ddcbfbb15,Namespace:kube-system,Attempt:0,} returns sandbox id \"a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e\"" Jan 30 13:48:49.965212 kubelet[2504]: E0130 13:48:49.965192 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:49.966853 containerd[1463]: time="2025-01-30T13:48:49.966825654Z" level=info msg="CreateContainer within sandbox \"a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:48:49.982576 containerd[1463]: time="2025-01-30T13:48:49.982546206Z" level=info msg="CreateContainer within sandbox \"a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb073cfb5d7e351cdaf92d0e0abd9444918847c95440d98fba89168a8d0871ad\"" Jan 30 13:48:49.983054 
containerd[1463]: time="2025-01-30T13:48:49.983027993Z" level=info msg="StartContainer for \"eb073cfb5d7e351cdaf92d0e0abd9444918847c95440d98fba89168a8d0871ad\"" Jan 30 13:48:50.015276 systemd[1]: Started cri-containerd-eb073cfb5d7e351cdaf92d0e0abd9444918847c95440d98fba89168a8d0871ad.scope - libcontainer container eb073cfb5d7e351cdaf92d0e0abd9444918847c95440d98fba89168a8d0871ad. Jan 30 13:48:50.042949 containerd[1463]: time="2025-01-30T13:48:50.042912326Z" level=info msg="StartContainer for \"eb073cfb5d7e351cdaf92d0e0abd9444918847c95440d98fba89168a8d0871ad\" returns successfully" Jan 30 13:48:50.151871 containerd[1463]: time="2025-01-30T13:48:50.151751317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-9rnbq,Uid:9336c609-9a8a-4dba-96d7-d3da1f664b1c,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:48:50.182160 containerd[1463]: time="2025-01-30T13:48:50.182053037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:50.182592 containerd[1463]: time="2025-01-30T13:48:50.182122910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:50.182592 containerd[1463]: time="2025-01-30T13:48:50.182315636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:50.183222 containerd[1463]: time="2025-01-30T13:48:50.182808111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:50.206321 systemd[1]: Started cri-containerd-f9d1c8ecfce87dcbde1318ab1cc0a798d7cf4e859632fcc5508069859e94196c.scope - libcontainer container f9d1c8ecfce87dcbde1318ab1cc0a798d7cf4e859632fcc5508069859e94196c. Jan 30 13:48:50.240079 containerd[1463]: time="2025-01-30T13:48:50.240004791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-9rnbq,Uid:9336c609-9a8a-4dba-96d7-d3da1f664b1c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9d1c8ecfce87dcbde1318ab1cc0a798d7cf4e859632fcc5508069859e94196c\"" Jan 30 13:48:50.241602 containerd[1463]: time="2025-01-30T13:48:50.241559144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:48:51.047428 kubelet[2504]: E0130 13:48:51.047394 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:51.504057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600590932.mount: Deactivated successfully. 
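The kube-proxy-blfmg pod above walks through the usual CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer inside that sandbox returns a container id, and StartContainer reports success; the tigera-operator sandbox follows the same pattern before its image pull begins. A small Go sketch for pulling those ids out of containerd entries of the shape shown here; the regular expressions are fitted to this log's msg format and are assumptions, not a containerd API.

package main

import (
	"fmt"
	"regexp"
)

// Sample fragments trimmed from the containerd entries above.
var lines = []string{
	`RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blfmg,...} returns sandbox id "a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e"`,
	`CreateContainer within sandbox "a320c8214ee93985385a3d99a644db6be47afa40dcc7625683a8f90381e6275e" returns container id "eb073cfb5d7e351cdaf92d0e0abd9444918847c95440d98fba89168a8d0871ad"`,
}

var (
	sandboxRe   = regexp.MustCompile(`returns sandbox id "([0-9a-f]{64})"`)
	containerRe = regexp.MustCompile(`returns container id "([0-9a-f]{64})"`)
)

func main() {
	for _, l := range lines {
		if m := sandboxRe.FindStringSubmatch(l); m != nil {
			fmt.Println("sandbox:", m[1])
		}
		if m := containerRe.FindStringSubmatch(l); m != nil {
			fmt.Println("container:", m[1])
		}
	}
}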
Jan 30 13:48:51.824204 containerd[1463]: time="2025-01-30T13:48:51.824143818Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:51.824999 containerd[1463]: time="2025-01-30T13:48:51.824968433Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:48:51.826223 containerd[1463]: time="2025-01-30T13:48:51.826176986Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:51.828958 containerd[1463]: time="2025-01-30T13:48:51.828921084Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:51.829994 containerd[1463]: time="2025-01-30T13:48:51.829954384Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.58835843s" Jan 30 13:48:51.830039 containerd[1463]: time="2025-01-30T13:48:51.829996945Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:48:51.832300 containerd[1463]: time="2025-01-30T13:48:51.832243730Z" level=info msg="CreateContainer within sandbox \"f9d1c8ecfce87dcbde1318ab1cc0a798d7cf4e859632fcc5508069859e94196c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:48:51.844069 containerd[1463]: time="2025-01-30T13:48:51.844018545Z" level=info msg="CreateContainer within sandbox \"f9d1c8ecfce87dcbde1318ab1cc0a798d7cf4e859632fcc5508069859e94196c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fb9a8a1785f88c414a9fee43e0fc5295710538d1d9077455ec6c6170c339bae0\"" Jan 30 13:48:51.845264 containerd[1463]: time="2025-01-30T13:48:51.844540977Z" level=info msg="StartContainer for \"fb9a8a1785f88c414a9fee43e0fc5295710538d1d9077455ec6c6170c339bae0\"" Jan 30 13:48:51.880288 systemd[1]: Started cri-containerd-fb9a8a1785f88c414a9fee43e0fc5295710538d1d9077455ec6c6170c339bae0.scope - libcontainer container fb9a8a1785f88c414a9fee43e0fc5295710538d1d9077455ec6c6170c339bae0. 
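The pull entries above report both the transfer size and the wall time for quay.io/tigera/operator:v1.36.2 (bytes read=21762497, completed in 1.58835843s), so the effective pull rate works out to roughly 13.7 MB/s. A short Go check of that arithmetic, using the figures quoted from the log:

package main

import "fmt"

func main() {
	const bytesRead = 21762497.0 // "bytes read" reported when the pull stopped
	const seconds = 1.58835843   // duration reported in the "Pulled image" entry
	fmt.Printf("effective pull rate: %.1f MB/s\n", bytesRead/seconds/1e6) // ~13.7 MB/s
}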
Jan 30 13:48:52.014073 containerd[1463]: time="2025-01-30T13:48:52.014016374Z" level=info msg="StartContainer for \"fb9a8a1785f88c414a9fee43e0fc5295710538d1d9077455ec6c6170c339bae0\" returns successfully" Jan 30 13:48:52.058820 kubelet[2504]: I0130 13:48:52.058742 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-blfmg" podStartSLOduration=3.058721509 podStartE2EDuration="3.058721509s" podCreationTimestamp="2025-01-30 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:51.054780971 +0000 UTC m=+7.110030881" watchObservedRunningTime="2025-01-30 13:48:52.058721509 +0000 UTC m=+8.113971418" Jan 30 13:48:52.059319 kubelet[2504]: I0130 13:48:52.058882 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-9rnbq" podStartSLOduration=1.46908983 podStartE2EDuration="3.05887526s" podCreationTimestamp="2025-01-30 13:48:49 +0000 UTC" firstStartedPulling="2025-01-30 13:48:50.241187548 +0000 UTC m=+6.296437457" lastFinishedPulling="2025-01-30 13:48:51.830972978 +0000 UTC m=+7.886222887" observedRunningTime="2025-01-30 13:48:52.058538532 +0000 UTC m=+8.113788451" watchObservedRunningTime="2025-01-30 13:48:52.05887526 +0000 UTC m=+8.114125189" Jan 30 13:48:53.226878 kubelet[2504]: E0130 13:48:53.226826 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:53.531871 update_engine[1450]: I20250130 13:48:53.531693 1450 update_attempter.cc:509] Updating boot flags... Jan 30 13:48:53.558196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2894) Jan 30 13:48:53.591226 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2896) Jan 30 13:48:53.620186 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2896) Jan 30 13:48:54.052690 kubelet[2504]: E0130 13:48:54.052651 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:54.464216 kubelet[2504]: E0130 13:48:54.464083 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:54.784745 systemd[1]: Created slice kubepods-besteffort-pode10106f7_8042_4b91_847e_f48eba4ca05c.slice - libcontainer container kubepods-besteffort-pode10106f7_8042_4b91_847e_f48eba4ca05c.slice. 
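The startup-latency figures here are internally consistent: for tigera-operator-76c4976dd7-9rnbq, podStartE2EDuration (3.05887526s) minus the image pull window (lastFinishedPulling - firstStartedPulling = 1.58978543s) equals the reported podStartSLOduration of 1.46908983s, and for kube-proxy-blfmg, which pulled nothing, the two durations coincide. The subtraction rule is inferred from these figures rather than from kubelet source; a short Go check:

package main

import (
	"fmt"
	"time"
)

// mustParse handles the "2006-01-02 15:04:05.999999999 -0700 MST" layout used
// by the timestamps in these log fields.
func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Figures reported for tigera-operator-76c4976dd7-9rnbq in the entry above.
	e2e := 3058875260 * time.Nanosecond // podStartE2EDuration = 3.05887526s
	pullStart := mustParse("2025-01-30 13:48:50.241187548 +0000 UTC")
	pullEnd := mustParse("2025-01-30 13:48:51.830972978 +0000 UTC")

	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(slo) // prints 1.46908983s, matching the reported podStartSLOduration
}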
Jan 30 13:48:54.886818 kubelet[2504]: I0130 13:48:54.886685 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e10106f7-8042-4b91-847e-f48eba4ca05c-typha-certs\") pod \"calico-typha-85dfcd5887-wm4sz\" (UID: \"e10106f7-8042-4b91-847e-f48eba4ca05c\") " pod="calico-system/calico-typha-85dfcd5887-wm4sz" Jan 30 13:48:54.886818 kubelet[2504]: I0130 13:48:54.886725 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e10106f7-8042-4b91-847e-f48eba4ca05c-tigera-ca-bundle\") pod \"calico-typha-85dfcd5887-wm4sz\" (UID: \"e10106f7-8042-4b91-847e-f48eba4ca05c\") " pod="calico-system/calico-typha-85dfcd5887-wm4sz" Jan 30 13:48:54.886818 kubelet[2504]: I0130 13:48:54.886750 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmwq\" (UniqueName: \"kubernetes.io/projected/e10106f7-8042-4b91-847e-f48eba4ca05c-kube-api-access-htmwq\") pod \"calico-typha-85dfcd5887-wm4sz\" (UID: \"e10106f7-8042-4b91-847e-f48eba4ca05c\") " pod="calico-system/calico-typha-85dfcd5887-wm4sz" Jan 30 13:48:54.890695 systemd[1]: Created slice kubepods-besteffort-pod7222762e_e4ad_4b8f_9c4e_3d84d7705e6a.slice - libcontainer container kubepods-besteffort-pod7222762e_e4ad_4b8f_9c4e_3d84d7705e6a.slice. Jan 30 13:48:54.987751 kubelet[2504]: I0130 13:48:54.987654 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-flexvol-driver-host\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.987751 kubelet[2504]: I0130 13:48:54.987710 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-policysync\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.987751 kubelet[2504]: I0130 13:48:54.987728 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-var-run-calico\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.987751 kubelet[2504]: I0130 13:48:54.987742 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-var-lib-calico\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.987751 kubelet[2504]: I0130 13:48:54.987757 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-cni-log-dir\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988255 kubelet[2504]: I0130 13:48:54.987771 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-xtables-lock\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988255 kubelet[2504]: I0130 13:48:54.987834 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-cni-bin-dir\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988255 kubelet[2504]: I0130 13:48:54.987900 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-lib-modules\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988255 kubelet[2504]: I0130 13:48:54.988197 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-tigera-ca-bundle\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988255 kubelet[2504]: I0130 13:48:54.988221 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbfwh\" (UniqueName: \"kubernetes.io/projected/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-kube-api-access-gbfwh\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988418 kubelet[2504]: I0130 13:48:54.988255 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-node-certs\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.988418 kubelet[2504]: I0130 13:48:54.988280 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7222762e-e4ad-4b8f-9c4e-3d84d7705e6a-cni-net-dir\") pod \"calico-node-jv9qh\" (UID: \"7222762e-e4ad-4b8f-9c4e-3d84d7705e6a\") " pod="calico-system/calico-node-jv9qh" Jan 30 13:48:54.992162 kubelet[2504]: E0130 13:48:54.990537 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:48:55.088670 kubelet[2504]: I0130 13:48:55.088521 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmrhs\" (UniqueName: \"kubernetes.io/projected/7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc-kube-api-access-cmrhs\") pod \"csi-node-driver-jx5hr\" (UID: \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\") " pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:48:55.088670 kubelet[2504]: I0130 13:48:55.088611 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc-kubelet-dir\") pod \"csi-node-driver-jx5hr\" (UID: \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\") " pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:48:55.088670 kubelet[2504]: I0130 13:48:55.088628 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc-socket-dir\") pod \"csi-node-driver-jx5hr\" (UID: \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\") " pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:48:55.088906 kubelet[2504]: I0130 13:48:55.088709 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc-varrun\") pod \"csi-node-driver-jx5hr\" (UID: \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\") " pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:48:55.088906 kubelet[2504]: I0130 13:48:55.088732 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc-registration-dir\") pod \"csi-node-driver-jx5hr\" (UID: \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\") " pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:48:55.090821 kubelet[2504]: E0130 13:48:55.090040 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.090821 kubelet[2504]: W0130 13:48:55.090061 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.090821 kubelet[2504]: E0130 13:48:55.090080 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.092590 kubelet[2504]: E0130 13:48:55.092567 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.092590 kubelet[2504]: W0130 13:48:55.092586 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.092656 kubelet[2504]: E0130 13:48:55.092628 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.096159 kubelet[2504]: E0130 13:48:55.094037 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:55.096260 containerd[1463]: time="2025-01-30T13:48:55.095013822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85dfcd5887-wm4sz,Uid:e10106f7-8042-4b91-847e-f48eba4ca05c,Namespace:calico-system,Attempt:0,}" Jan 30 13:48:55.097657 kubelet[2504]: E0130 13:48:55.097631 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.097731 kubelet[2504]: W0130 13:48:55.097717 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.097789 kubelet[2504]: E0130 13:48:55.097771 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.169384 containerd[1463]: time="2025-01-30T13:48:55.169215783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:55.169384 containerd[1463]: time="2025-01-30T13:48:55.169285545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:55.169384 containerd[1463]: time="2025-01-30T13:48:55.169301335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:55.169544 containerd[1463]: time="2025-01-30T13:48:55.169424779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:55.189275 kubelet[2504]: E0130 13:48:55.189234 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.189275 kubelet[2504]: W0130 13:48:55.189260 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.189275 kubelet[2504]: E0130 13:48:55.189280 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.189534 kubelet[2504]: E0130 13:48:55.189511 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.189534 kubelet[2504]: W0130 13:48:55.189524 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.189534 kubelet[2504]: E0130 13:48:55.189536 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.189774 kubelet[2504]: E0130 13:48:55.189749 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.189774 kubelet[2504]: W0130 13:48:55.189761 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.189774 kubelet[2504]: E0130 13:48:55.189772 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.190007 kubelet[2504]: E0130 13:48:55.189991 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.190007 kubelet[2504]: W0130 13:48:55.190003 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.190075 kubelet[2504]: E0130 13:48:55.190014 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.190267 kubelet[2504]: E0130 13:48:55.190249 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.190267 kubelet[2504]: W0130 13:48:55.190261 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.190340 kubelet[2504]: E0130 13:48:55.190272 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.190517 kubelet[2504]: E0130 13:48:55.190501 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.190517 kubelet[2504]: W0130 13:48:55.190513 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.190586 kubelet[2504]: E0130 13:48:55.190524 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.190760 kubelet[2504]: E0130 13:48:55.190716 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.190760 kubelet[2504]: W0130 13:48:55.190727 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.190828 kubelet[2504]: E0130 13:48:55.190780 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.190983 kubelet[2504]: E0130 13:48:55.190966 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.190983 kubelet[2504]: W0130 13:48:55.190977 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.191052 kubelet[2504]: E0130 13:48:55.191017 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.191356 systemd[1]: Started cri-containerd-7731250bbf5f2d107503e669addb17e23498c610654f0ddc136011c263aaf4b3.scope - libcontainer container 7731250bbf5f2d107503e669addb17e23498c610654f0ddc136011c263aaf4b3. Jan 30 13:48:55.191593 kubelet[2504]: E0130 13:48:55.191573 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.191593 kubelet[2504]: W0130 13:48:55.191585 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.191668 kubelet[2504]: E0130 13:48:55.191636 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.192412 kubelet[2504]: E0130 13:48:55.191893 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.192412 kubelet[2504]: W0130 13:48:55.191906 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.192412 kubelet[2504]: E0130 13:48:55.191988 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.192412 kubelet[2504]: E0130 13:48:55.192104 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.192412 kubelet[2504]: W0130 13:48:55.192111 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.192412 kubelet[2504]: E0130 13:48:55.192175 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.192412 kubelet[2504]: E0130 13:48:55.192368 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.192412 kubelet[2504]: W0130 13:48:55.192376 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.192412 kubelet[2504]: E0130 13:48:55.192418 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.192691 kubelet[2504]: E0130 13:48:55.192610 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.192691 kubelet[2504]: W0130 13:48:55.192616 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.192691 kubelet[2504]: E0130 13:48:55.192670 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.192888 kubelet[2504]: E0130 13:48:55.192871 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.192888 kubelet[2504]: W0130 13:48:55.192882 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.192957 kubelet[2504]: E0130 13:48:55.192893 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.193117 kubelet[2504]: E0130 13:48:55.193078 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.193117 kubelet[2504]: W0130 13:48:55.193091 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.193218 kubelet[2504]: E0130 13:48:55.193157 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.193433 kubelet[2504]: E0130 13:48:55.193416 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.193433 kubelet[2504]: W0130 13:48:55.193428 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.193606 kubelet[2504]: E0130 13:48:55.193587 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:55.193960 kubelet[2504]: E0130 13:48:55.193892 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.194270 containerd[1463]: time="2025-01-30T13:48:55.194227960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jv9qh,Uid:7222762e-e4ad-4b8f-9c4e-3d84d7705e6a,Namespace:calico-system,Attempt:0,}" Jan 30 13:48:55.194734 kubelet[2504]: E0130 13:48:55.194699 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.194734 kubelet[2504]: W0130 13:48:55.194714 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.194910 kubelet[2504]: E0130 13:48:55.194829 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.203616 kubelet[2504]: E0130 13:48:55.203585 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.203616 kubelet[2504]: W0130 13:48:55.203608 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.203795 kubelet[2504]: E0130 13:48:55.203694 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.203864 kubelet[2504]: E0130 13:48:55.203849 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.203903 kubelet[2504]: W0130 13:48:55.203864 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.204173 kubelet[2504]: E0130 13:48:55.204128 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.204771 kubelet[2504]: E0130 13:48:55.204746 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.204771 kubelet[2504]: W0130 13:48:55.204761 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.204856 kubelet[2504]: E0130 13:48:55.204836 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.205697 kubelet[2504]: E0130 13:48:55.205669 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.205697 kubelet[2504]: W0130 13:48:55.205687 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.205829 kubelet[2504]: E0130 13:48:55.205728 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.205954 kubelet[2504]: E0130 13:48:55.205937 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.205954 kubelet[2504]: W0130 13:48:55.205950 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.206030 kubelet[2504]: E0130 13:48:55.205983 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.206385 kubelet[2504]: E0130 13:48:55.206368 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.206385 kubelet[2504]: W0130 13:48:55.206381 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.206482 kubelet[2504]: E0130 13:48:55.206408 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.206653 kubelet[2504]: E0130 13:48:55.206629 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.206653 kubelet[2504]: W0130 13:48:55.206644 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.206753 kubelet[2504]: E0130 13:48:55.206662 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:55.207426 kubelet[2504]: E0130 13:48:55.207284 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.207426 kubelet[2504]: W0130 13:48:55.207298 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.207426 kubelet[2504]: E0130 13:48:55.207310 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.214510 kubelet[2504]: E0130 13:48:55.214477 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:55.214510 kubelet[2504]: W0130 13:48:55.214505 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:55.214589 kubelet[2504]: E0130 13:48:55.214519 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:55.228892 containerd[1463]: time="2025-01-30T13:48:55.228853491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85dfcd5887-wm4sz,Uid:e10106f7-8042-4b91-847e-f48eba4ca05c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7731250bbf5f2d107503e669addb17e23498c610654f0ddc136011c263aaf4b3\"" Jan 30 13:48:55.229709 kubelet[2504]: E0130 13:48:55.229684 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:55.233270 containerd[1463]: time="2025-01-30T13:48:55.233236736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:48:55.250422 containerd[1463]: time="2025-01-30T13:48:55.250355053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:55.250422 containerd[1463]: time="2025-01-30T13:48:55.250408445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:55.250422 containerd[1463]: time="2025-01-30T13:48:55.250422100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:55.250730 containerd[1463]: time="2025-01-30T13:48:55.250680059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:55.271277 systemd[1]: Started cri-containerd-0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb.scope - libcontainer container 0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb. 
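The FlexVolume noise running through this part of the log has a single root cause repeated on every plugin probe: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present ("executable file not found in $PATH"), the init call therefore produces no output, and unmarshalling an empty string as JSON fails with "unexpected end of JSON input". That error text is exactly what Go's encoding/json returns for empty input, as the sketch below shows; the driverStatus struct is a stand-in for illustration, not the kubelet type.

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the structure a FlexVolume driver is expected to
// print on stdout; the real schema is not reproduced here.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	var st driverStatus
	// An absent driver binary yields empty output, and empty input is not valid JSON.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}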
Jan 30 13:48:55.290719 containerd[1463]: time="2025-01-30T13:48:55.290672586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jv9qh,Uid:7222762e-e4ad-4b8f-9c4e-3d84d7705e6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\"" Jan 30 13:48:55.291372 kubelet[2504]: E0130 13:48:55.291343 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:56.156485 kubelet[2504]: E0130 13:48:56.156452 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:56.181267 kubelet[2504]: E0130 13:48:56.181214 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.181267 kubelet[2504]: W0130 13:48:56.181253 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.181267 kubelet[2504]: E0130 13:48:56.181273 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.181445 kubelet[2504]: E0130 13:48:56.181432 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.181445 kubelet[2504]: W0130 13:48:56.181439 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.181490 kubelet[2504]: E0130 13:48:56.181447 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.181616 kubelet[2504]: E0130 13:48:56.181604 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.181616 kubelet[2504]: W0130 13:48:56.181614 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.181681 kubelet[2504]: E0130 13:48:56.181621 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.181783 kubelet[2504]: E0130 13:48:56.181772 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.181783 kubelet[2504]: W0130 13:48:56.181781 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.181848 kubelet[2504]: E0130 13:48:56.181788 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:56.181981 kubelet[2504]: E0130 13:48:56.181965 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.181981 kubelet[2504]: W0130 13:48:56.181974 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.181981 kubelet[2504]: E0130 13:48:56.181981 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.182156 kubelet[2504]: E0130 13:48:56.182145 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.182156 kubelet[2504]: W0130 13:48:56.182154 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.182213 kubelet[2504]: E0130 13:48:56.182163 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.182427 kubelet[2504]: E0130 13:48:56.182415 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.182427 kubelet[2504]: W0130 13:48:56.182424 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.182491 kubelet[2504]: E0130 13:48:56.182432 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.182603 kubelet[2504]: E0130 13:48:56.182591 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.182630 kubelet[2504]: W0130 13:48:56.182608 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.182630 kubelet[2504]: E0130 13:48:56.182615 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.182789 kubelet[2504]: E0130 13:48:56.182778 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.182789 kubelet[2504]: W0130 13:48:56.182786 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.182845 kubelet[2504]: E0130 13:48:56.182793 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:56.182973 kubelet[2504]: E0130 13:48:56.182961 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.182973 kubelet[2504]: W0130 13:48:56.182970 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.183034 kubelet[2504]: E0130 13:48:56.182977 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.183164 kubelet[2504]: E0130 13:48:56.183149 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.183164 kubelet[2504]: W0130 13:48:56.183158 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.183252 kubelet[2504]: E0130 13:48:56.183165 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.183388 kubelet[2504]: E0130 13:48:56.183371 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.183388 kubelet[2504]: W0130 13:48:56.183383 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.183447 kubelet[2504]: E0130 13:48:56.183392 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.183587 kubelet[2504]: E0130 13:48:56.183570 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.183587 kubelet[2504]: W0130 13:48:56.183580 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.183643 kubelet[2504]: E0130 13:48:56.183589 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.183804 kubelet[2504]: E0130 13:48:56.183788 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.183804 kubelet[2504]: W0130 13:48:56.183799 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.183871 kubelet[2504]: E0130 13:48:56.183809 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:56.184000 kubelet[2504]: E0130 13:48:56.183985 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:56.184000 kubelet[2504]: W0130 13:48:56.183996 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:56.184051 kubelet[2504]: E0130 13:48:56.184007 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:56.511267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903747105.mount: Deactivated successfully. Jan 30 13:48:57.022248 kubelet[2504]: E0130 13:48:57.022161 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:48:57.648486 containerd[1463]: time="2025-01-30T13:48:57.648432275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:57.649286 containerd[1463]: time="2025-01-30T13:48:57.649207340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:48:57.650505 containerd[1463]: time="2025-01-30T13:48:57.650474876Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:57.652563 containerd[1463]: time="2025-01-30T13:48:57.652520023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:57.653105 containerd[1463]: time="2025-01-30T13:48:57.653070975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.419795744s" Jan 30 13:48:57.653105 containerd[1463]: time="2025-01-30T13:48:57.653100300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:48:57.654356 containerd[1463]: time="2025-01-30T13:48:57.654018074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:48:57.662230 containerd[1463]: time="2025-01-30T13:48:57.662191206Z" level=info msg="CreateContainer within sandbox \"7731250bbf5f2d107503e669addb17e23498c610654f0ddc136011c263aaf4b3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:48:57.676269 containerd[1463]: time="2025-01-30T13:48:57.676215307Z" level=info msg="CreateContainer within sandbox \"7731250bbf5f2d107503e669addb17e23498c610654f0ddc136011c263aaf4b3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"a5a014df5bbd748c4d0e2858203ab481efbf0628e1fc6a87889a315121eca145\"" Jan 30 13:48:57.676743 containerd[1463]: time="2025-01-30T13:48:57.676708740Z" level=info msg="StartContainer for \"a5a014df5bbd748c4d0e2858203ab481efbf0628e1fc6a87889a315121eca145\"" Jan 30 13:48:57.713289 systemd[1]: Started cri-containerd-a5a014df5bbd748c4d0e2858203ab481efbf0628e1fc6a87889a315121eca145.scope - libcontainer container a5a014df5bbd748c4d0e2858203ab481efbf0628e1fc6a87889a315121eca145. Jan 30 13:48:57.834811 containerd[1463]: time="2025-01-30T13:48:57.834769702Z" level=info msg="StartContainer for \"a5a014df5bbd748c4d0e2858203ab481efbf0628e1fc6a87889a315121eca145\" returns successfully" Jan 30 13:48:58.180081 kubelet[2504]: E0130 13:48:58.180033 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:58.190816 kubelet[2504]: I0130 13:48:58.190759 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85dfcd5887-wm4sz" podStartSLOduration=1.767185118 podStartE2EDuration="4.190741939s" podCreationTimestamp="2025-01-30 13:48:54 +0000 UTC" firstStartedPulling="2025-01-30 13:48:55.230307773 +0000 UTC m=+11.285557683" lastFinishedPulling="2025-01-30 13:48:57.653864595 +0000 UTC m=+13.709114504" observedRunningTime="2025-01-30 13:48:58.190559725 +0000 UTC m=+14.245809644" watchObservedRunningTime="2025-01-30 13:48:58.190741939 +0000 UTC m=+14.245991848" Jan 30 13:48:58.197033 kubelet[2504]: E0130 13:48:58.197005 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:58.197033 kubelet[2504]: W0130 13:48:58.197023 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:58.197128 kubelet[2504]: E0130 13:48:58.197043 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:58.197363 kubelet[2504]: E0130 13:48:58.197339 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:58.197363 kubelet[2504]: W0130 13:48:58.197350 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:58.197363 kubelet[2504]: E0130 13:48:58.197361 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:58.197611 kubelet[2504]: E0130 13:48:58.197586 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:58.197611 kubelet[2504]: W0130 13:48:58.197598 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:58.197611 kubelet[2504]: E0130 13:48:58.197609 2504 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:59.022643 kubelet[2504]: E0130 13:48:59.022592 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:48:59.086160 containerd[1463]: time="2025-01-30T13:48:59.086088406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:59.087368 containerd[1463]: time="2025-01-30T13:48:59.087329500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:48:59.088538 containerd[1463]: time="2025-01-30T13:48:59.088509949Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:59.091243 containerd[1463]: time="2025-01-30T13:48:59.091201963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:59.091846 containerd[1463]: time="2025-01-30T13:48:59.091809430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.437753595s" Jan 30 13:48:59.091846 containerd[1463]: time="2025-01-30T13:48:59.091839717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:48:59.093639 containerd[1463]: time="2025-01-30T13:48:59.093594601Z" level=info msg="CreateContainer within sandbox \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:48:59.108022 containerd[1463]: time="2025-01-30T13:48:59.107985219Z" level=info msg="CreateContainer within sandbox \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d\"" Jan 30 13:48:59.108652 containerd[1463]: time="2025-01-30T13:48:59.108434808Z" level=info msg="StartContainer for \"cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d\"" Jan 30 13:48:59.138275 systemd[1]: Started cri-containerd-cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d.scope - libcontainer container cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d. Jan 30 13:48:59.167887 containerd[1463]: time="2025-01-30T13:48:59.167122523Z" level=info msg="StartContainer for \"cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d\" returns successfully" Jan 30 13:48:59.179634 systemd[1]: cri-containerd-cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d.scope: Deactivated successfully. 
Jan 30 13:48:59.185004 kubelet[2504]: E0130 13:48:59.184688 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:59.186683 kubelet[2504]: I0130 13:48:59.186594 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:48:59.188667 kubelet[2504]: E0130 13:48:59.188114 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:59.263607 containerd[1463]: time="2025-01-30T13:48:59.263472917Z" level=info msg="shim disconnected" id=cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d namespace=k8s.io Jan 30 13:48:59.263607 containerd[1463]: time="2025-01-30T13:48:59.263550333Z" level=warning msg="cleaning up after shim disconnected" id=cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d namespace=k8s.io Jan 30 13:48:59.263607 containerd[1463]: time="2025-01-30T13:48:59.263562706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:48:59.658646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb44ecc81ba34ac8d5239d8d99d3006b72f66fd1ec9f5b2419c641cea1355a6d-rootfs.mount: Deactivated successfully. Jan 30 13:49:00.187340 kubelet[2504]: E0130 13:49:00.187234 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:00.188381 containerd[1463]: time="2025-01-30T13:49:00.188297604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:49:01.022404 kubelet[2504]: E0130 13:49:01.022334 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:49:03.022297 kubelet[2504]: E0130 13:49:03.022239 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:49:04.149814 containerd[1463]: time="2025-01-30T13:49:04.149769837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:04.153467 containerd[1463]: time="2025-01-30T13:49:04.153397343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:49:04.154473 containerd[1463]: time="2025-01-30T13:49:04.154415553Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:04.158255 containerd[1463]: time="2025-01-30T13:49:04.158209905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:04.159697 containerd[1463]: time="2025-01-30T13:49:04.159572733Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.971235174s" Jan 30 13:49:04.159697 containerd[1463]: time="2025-01-30T13:49:04.159614051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:49:04.163300 containerd[1463]: time="2025-01-30T13:49:04.163245415Z" level=info msg="CreateContainer within sandbox \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:49:04.177714 containerd[1463]: time="2025-01-30T13:49:04.177665354Z" level=info msg="CreateContainer within sandbox \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341\"" Jan 30 13:49:04.178762 containerd[1463]: time="2025-01-30T13:49:04.178636415Z" level=info msg="StartContainer for \"78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341\"" Jan 30 13:49:04.213323 systemd[1]: Started cri-containerd-78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341.scope - libcontainer container 78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341. Jan 30 13:49:04.248537 containerd[1463]: time="2025-01-30T13:49:04.248395829Z" level=info msg="StartContainer for \"78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341\" returns successfully" Jan 30 13:49:05.036591 kubelet[2504]: E0130 13:49:05.036483 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:49:05.220503 containerd[1463]: time="2025-01-30T13:49:05.220403393Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:49:05.223841 systemd[1]: cri-containerd-78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341.scope: Deactivated successfully. Jan 30 13:49:05.235180 kubelet[2504]: E0130 13:49:05.235113 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:05.249403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341-rootfs.mount: Deactivated successfully. Jan 30 13:49:05.272933 kubelet[2504]: I0130 13:49:05.272741 2504 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:49:05.389164 systemd[1]: Created slice kubepods-burstable-pod9a799fbf_c296_42f5_b998_fc9dfc513713.slice - libcontainer container kubepods-burstable-pod9a799fbf_c296_42f5_b998_fc9dfc513713.slice. 
Jan 30 13:49:05.395778 systemd[1]: Created slice kubepods-burstable-poda4262e5b_77cf_4ec4_9ed8_8944d349be91.slice - libcontainer container kubepods-burstable-poda4262e5b_77cf_4ec4_9ed8_8944d349be91.slice. Jan 30 13:49:05.400370 systemd[1]: Created slice kubepods-besteffort-pod16c5035b_314e_415a_86ac_f74316cd7f64.slice - libcontainer container kubepods-besteffort-pod16c5035b_314e_415a_86ac_f74316cd7f64.slice. Jan 30 13:49:05.405119 systemd[1]: Created slice kubepods-besteffort-pode22f5f0e_705c_4efc_8f05_4e9e5c95bf68.slice - libcontainer container kubepods-besteffort-pode22f5f0e_705c_4efc_8f05_4e9e5c95bf68.slice. Jan 30 13:49:05.409828 systemd[1]: Created slice kubepods-besteffort-pod9d410147_17e9_453a_a04a_5d59f2f808df.slice - libcontainer container kubepods-besteffort-pod9d410147_17e9_453a_a04a_5d59f2f808df.slice. Jan 30 13:49:05.452100 containerd[1463]: time="2025-01-30T13:49:05.452012923Z" level=info msg="shim disconnected" id=78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341 namespace=k8s.io Jan 30 13:49:05.452100 containerd[1463]: time="2025-01-30T13:49:05.452073898Z" level=warning msg="cleaning up after shim disconnected" id=78ff3644827f4d6022df6dcb61a96c4e844685cc9feb51922a402c1299057341 namespace=k8s.io Jan 30 13:49:05.452100 containerd[1463]: time="2025-01-30T13:49:05.452083897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:49:05.464052 kubelet[2504]: I0130 13:49:05.464003 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4262e5b-77cf-4ec4-9ed8-8944d349be91-config-volume\") pod \"coredns-6f6b679f8f-6d9h6\" (UID: \"a4262e5b-77cf-4ec4-9ed8-8944d349be91\") " pod="kube-system/coredns-6f6b679f8f-6d9h6" Jan 30 13:49:05.464052 kubelet[2504]: I0130 13:49:05.464043 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprxs\" (UniqueName: \"kubernetes.io/projected/a4262e5b-77cf-4ec4-9ed8-8944d349be91-kube-api-access-xprxs\") pod \"coredns-6f6b679f8f-6d9h6\" (UID: \"a4262e5b-77cf-4ec4-9ed8-8944d349be91\") " pod="kube-system/coredns-6f6b679f8f-6d9h6" Jan 30 13:49:05.464052 kubelet[2504]: I0130 13:49:05.464064 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6ht\" (UniqueName: \"kubernetes.io/projected/9d410147-17e9-453a-a04a-5d59f2f808df-kube-api-access-lg6ht\") pod \"calico-apiserver-84cd9657bc-8m9s5\" (UID: \"9d410147-17e9-453a-a04a-5d59f2f808df\") " pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" Jan 30 13:49:05.464313 kubelet[2504]: I0130 13:49:05.464084 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e22f5f0e-705c-4efc-8f05-4e9e5c95bf68-calico-apiserver-certs\") pod \"calico-apiserver-84cd9657bc-jd7pp\" (UID: \"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68\") " pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" Jan 30 13:49:05.464313 kubelet[2504]: I0130 13:49:05.464103 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d410147-17e9-453a-a04a-5d59f2f808df-calico-apiserver-certs\") pod \"calico-apiserver-84cd9657bc-8m9s5\" (UID: \"9d410147-17e9-453a-a04a-5d59f2f808df\") " pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" Jan 30 13:49:05.464313 kubelet[2504]: I0130 
13:49:05.464121 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjh2\" (UniqueName: \"kubernetes.io/projected/9a799fbf-c296-42f5-b998-fc9dfc513713-kube-api-access-zkjh2\") pod \"coredns-6f6b679f8f-hhlnc\" (UID: \"9a799fbf-c296-42f5-b998-fc9dfc513713\") " pod="kube-system/coredns-6f6b679f8f-hhlnc" Jan 30 13:49:05.464313 kubelet[2504]: I0130 13:49:05.464155 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c5035b-314e-415a-86ac-f74316cd7f64-tigera-ca-bundle\") pod \"calico-kube-controllers-c68cd766f-p9c29\" (UID: \"16c5035b-314e-415a-86ac-f74316cd7f64\") " pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" Jan 30 13:49:05.464313 kubelet[2504]: I0130 13:49:05.464170 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpc7\" (UniqueName: \"kubernetes.io/projected/16c5035b-314e-415a-86ac-f74316cd7f64-kube-api-access-mtpc7\") pod \"calico-kube-controllers-c68cd766f-p9c29\" (UID: \"16c5035b-314e-415a-86ac-f74316cd7f64\") " pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" Jan 30 13:49:05.464484 kubelet[2504]: I0130 13:49:05.464185 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46cn2\" (UniqueName: \"kubernetes.io/projected/e22f5f0e-705c-4efc-8f05-4e9e5c95bf68-kube-api-access-46cn2\") pod \"calico-apiserver-84cd9657bc-jd7pp\" (UID: \"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68\") " pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" Jan 30 13:49:05.464484 kubelet[2504]: I0130 13:49:05.464199 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a799fbf-c296-42f5-b998-fc9dfc513713-config-volume\") pod \"coredns-6f6b679f8f-hhlnc\" (UID: \"9a799fbf-c296-42f5-b998-fc9dfc513713\") " pod="kube-system/coredns-6f6b679f8f-hhlnc" Jan 30 13:49:05.694423 kubelet[2504]: E0130 13:49:05.694265 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:05.695201 containerd[1463]: time="2025-01-30T13:49:05.695123574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhlnc,Uid:9a799fbf-c296-42f5-b998-fc9dfc513713,Namespace:kube-system,Attempt:0,}" Jan 30 13:49:05.698051 kubelet[2504]: E0130 13:49:05.698027 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:05.698552 containerd[1463]: time="2025-01-30T13:49:05.698514783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6d9h6,Uid:a4262e5b-77cf-4ec4-9ed8-8944d349be91,Namespace:kube-system,Attempt:0,}" Jan 30 13:49:05.703572 containerd[1463]: time="2025-01-30T13:49:05.703520054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c68cd766f-p9c29,Uid:16c5035b-314e-415a-86ac-f74316cd7f64,Namespace:calico-system,Attempt:0,}" Jan 30 13:49:05.708262 containerd[1463]: time="2025-01-30T13:49:05.708230479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-jd7pp,Uid:e22f5f0e-705c-4efc-8f05-4e9e5c95bf68,Namespace:calico-apiserver,Attempt:0,}" Jan 30 
13:49:05.713045 containerd[1463]: time="2025-01-30T13:49:05.713002070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-8m9s5,Uid:9d410147-17e9-453a-a04a-5d59f2f808df,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:49:05.816634 containerd[1463]: time="2025-01-30T13:49:05.816570635Z" level=error msg="Failed to destroy network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.817245 containerd[1463]: time="2025-01-30T13:49:05.817222764Z" level=error msg="encountered an error cleaning up failed sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.817365 containerd[1463]: time="2025-01-30T13:49:05.817346226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6d9h6,Uid:a4262e5b-77cf-4ec4-9ed8-8944d349be91,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.817874 kubelet[2504]: E0130 13:49:05.817808 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.818555 kubelet[2504]: E0130 13:49:05.817916 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6d9h6" Jan 30 13:49:05.818555 kubelet[2504]: E0130 13:49:05.818042 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6d9h6" Jan 30 13:49:05.818555 kubelet[2504]: E0130 13:49:05.818098 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6d9h6_kube-system(a4262e5b-77cf-4ec4-9ed8-8944d349be91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6d9h6_kube-system(a4262e5b-77cf-4ec4-9ed8-8944d349be91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6d9h6" podUID="a4262e5b-77cf-4ec4-9ed8-8944d349be91" Jan 30 13:49:05.827077 containerd[1463]: time="2025-01-30T13:49:05.827015906Z" level=error msg="Failed to destroy network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.827768 containerd[1463]: time="2025-01-30T13:49:05.827710254Z" level=error msg="encountered an error cleaning up failed sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.828666 containerd[1463]: time="2025-01-30T13:49:05.828621531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-8m9s5,Uid:9d410147-17e9-453a-a04a-5d59f2f808df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.829150 kubelet[2504]: E0130 13:49:05.829085 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.829214 kubelet[2504]: E0130 13:49:05.829158 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" Jan 30 13:49:05.829214 kubelet[2504]: E0130 13:49:05.829178 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" Jan 30 13:49:05.829273 kubelet[2504]: E0130 13:49:05.829209 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84cd9657bc-8m9s5_calico-apiserver(9d410147-17e9-453a-a04a-5d59f2f808df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-84cd9657bc-8m9s5_calico-apiserver(9d410147-17e9-453a-a04a-5d59f2f808df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" podUID="9d410147-17e9-453a-a04a-5d59f2f808df" Jan 30 13:49:05.833551 containerd[1463]: time="2025-01-30T13:49:05.833276192Z" level=error msg="Failed to destroy network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.834257 containerd[1463]: time="2025-01-30T13:49:05.834224439Z" level=error msg="encountered an error cleaning up failed sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.834401 containerd[1463]: time="2025-01-30T13:49:05.834273731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhlnc,Uid:9a799fbf-c296-42f5-b998-fc9dfc513713,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.834546 kubelet[2504]: E0130 13:49:05.834449 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.834546 kubelet[2504]: E0130 13:49:05.834522 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hhlnc" Jan 30 13:49:05.834546 kubelet[2504]: E0130 13:49:05.834542 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hhlnc" Jan 30 13:49:05.834654 kubelet[2504]: E0130 13:49:05.834579 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-hhlnc_kube-system(9a799fbf-c296-42f5-b998-fc9dfc513713)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hhlnc_kube-system(9a799fbf-c296-42f5-b998-fc9dfc513713)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hhlnc" podUID="9a799fbf-c296-42f5-b998-fc9dfc513713" Jan 30 13:49:05.837720 containerd[1463]: time="2025-01-30T13:49:05.837683335Z" level=error msg="Failed to destroy network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.838097 containerd[1463]: time="2025-01-30T13:49:05.838068912Z" level=error msg="encountered an error cleaning up failed sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.838155 containerd[1463]: time="2025-01-30T13:49:05.838113616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-jd7pp,Uid:e22f5f0e-705c-4efc-8f05-4e9e5c95bf68,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.838387 kubelet[2504]: E0130 13:49:05.838351 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.838504 kubelet[2504]: E0130 13:49:05.838400 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" Jan 30 13:49:05.838504 kubelet[2504]: E0130 13:49:05.838421 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" Jan 30 13:49:05.838504 kubelet[2504]: E0130 13:49:05.838458 2504 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84cd9657bc-jd7pp_calico-apiserver(e22f5f0e-705c-4efc-8f05-4e9e5c95bf68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84cd9657bc-jd7pp_calico-apiserver(e22f5f0e-705c-4efc-8f05-4e9e5c95bf68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" podUID="e22f5f0e-705c-4efc-8f05-4e9e5c95bf68" Jan 30 13:49:05.839521 containerd[1463]: time="2025-01-30T13:49:05.839479430Z" level=error msg="Failed to destroy network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.839910 containerd[1463]: time="2025-01-30T13:49:05.839885044Z" level=error msg="encountered an error cleaning up failed sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.839959 containerd[1463]: time="2025-01-30T13:49:05.839941400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c68cd766f-p9c29,Uid:16c5035b-314e-415a-86ac-f74316cd7f64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.840163 kubelet[2504]: E0130 13:49:05.840113 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:05.840195 kubelet[2504]: E0130 13:49:05.840170 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" Jan 30 13:49:05.840195 kubelet[2504]: E0130 13:49:05.840186 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" Jan 30 13:49:05.840295 kubelet[2504]: E0130 13:49:05.840215 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c68cd766f-p9c29_calico-system(16c5035b-314e-415a-86ac-f74316cd7f64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c68cd766f-p9c29_calico-system(16c5035b-314e-415a-86ac-f74316cd7f64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" podUID="16c5035b-314e-415a-86ac-f74316cd7f64" Jan 30 13:49:06.226196 kubelet[2504]: I0130 13:49:06.226163 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:06.227432 kubelet[2504]: I0130 13:49:06.227411 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:06.228227 containerd[1463]: time="2025-01-30T13:49:06.227801618Z" level=info msg="StopPodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\"" Jan 30 13:49:06.228227 containerd[1463]: time="2025-01-30T13:49:06.227853566Z" level=info msg="StopPodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\"" Jan 30 13:49:06.228227 containerd[1463]: time="2025-01-30T13:49:06.227966418Z" level=info msg="Ensure that sandbox 4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3 in task-service has been cleanup successfully" Jan 30 13:49:06.228227 containerd[1463]: time="2025-01-30T13:49:06.228024408Z" level=info msg="Ensure that sandbox 22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37 in task-service has been cleanup successfully" Jan 30 13:49:06.229949 kubelet[2504]: E0130 13:49:06.229909 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:06.230982 containerd[1463]: time="2025-01-30T13:49:06.230952874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:49:06.231980 kubelet[2504]: I0130 13:49:06.231128 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:06.232035 containerd[1463]: time="2025-01-30T13:49:06.231716202Z" level=info msg="StopPodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\"" Jan 30 13:49:06.232035 containerd[1463]: time="2025-01-30T13:49:06.231834375Z" level=info msg="Ensure that sandbox f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282 in task-service has been cleanup successfully" Jan 30 13:49:06.233414 kubelet[2504]: I0130 13:49:06.233384 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:06.234990 containerd[1463]: time="2025-01-30T13:49:06.234616896Z" level=info msg="StopPodSandbox for 
\"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\"" Jan 30 13:49:06.234990 containerd[1463]: time="2025-01-30T13:49:06.234774001Z" level=info msg="Ensure that sandbox 80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207 in task-service has been cleanup successfully" Jan 30 13:49:06.242496 kubelet[2504]: I0130 13:49:06.242452 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:06.243353 containerd[1463]: time="2025-01-30T13:49:06.243303839Z" level=info msg="StopPodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\"" Jan 30 13:49:06.243831 containerd[1463]: time="2025-01-30T13:49:06.243811334Z" level=info msg="Ensure that sandbox be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa in task-service has been cleanup successfully" Jan 30 13:49:06.253872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa-shm.mount: Deactivated successfully. Jan 30 13:49:06.253986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37-shm.mount: Deactivated successfully. Jan 30 13:49:06.272295 containerd[1463]: time="2025-01-30T13:49:06.272146788Z" level=error msg="StopPodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" failed" error="failed to destroy network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:06.274508 kubelet[2504]: E0130 13:49:06.274466 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:06.274610 kubelet[2504]: E0130 13:49:06.274531 2504 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3"} Jan 30 13:49:06.274610 kubelet[2504]: E0130 13:49:06.274606 2504 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:06.274735 kubelet[2504]: E0130 13:49:06.274632 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" podUID="e22f5f0e-705c-4efc-8f05-4e9e5c95bf68" Jan 30 13:49:06.283347 containerd[1463]: time="2025-01-30T13:49:06.283303083Z" level=error msg="StopPodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" failed" error="failed to destroy network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:06.283896 kubelet[2504]: E0130 13:49:06.283746 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:06.283896 kubelet[2504]: E0130 13:49:06.283805 2504 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282"} Jan 30 13:49:06.283896 kubelet[2504]: E0130 13:49:06.283841 2504 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d410147-17e9-453a-a04a-5d59f2f808df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:06.283896 kubelet[2504]: E0130 13:49:06.283865 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d410147-17e9-453a-a04a-5d59f2f808df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" podUID="9d410147-17e9-453a-a04a-5d59f2f808df" Jan 30 13:49:06.287765 containerd[1463]: time="2025-01-30T13:49:06.287733387Z" level=error msg="StopPodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" failed" error="failed to destroy network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:06.288028 kubelet[2504]: E0130 13:49:06.288001 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:06.288156 kubelet[2504]: E0130 13:49:06.288115 2504 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37"} Jan 30 13:49:06.288278 kubelet[2504]: E0130 13:49:06.288219 2504 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a799fbf-c296-42f5-b998-fc9dfc513713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:06.288278 kubelet[2504]: E0130 13:49:06.288250 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a799fbf-c296-42f5-b998-fc9dfc513713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hhlnc" podUID="9a799fbf-c296-42f5-b998-fc9dfc513713" Jan 30 13:49:06.292160 containerd[1463]: time="2025-01-30T13:49:06.292073181Z" level=error msg="StopPodSandbox for \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" failed" error="failed to destroy network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:06.292589 kubelet[2504]: E0130 13:49:06.292557 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:06.292589 kubelet[2504]: E0130 13:49:06.292589 2504 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207"} Jan 30 13:49:06.292708 kubelet[2504]: E0130 13:49:06.292614 2504 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16c5035b-314e-415a-86ac-f74316cd7f64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:06.292708 kubelet[2504]: E0130 13:49:06.292635 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16c5035b-314e-415a-86ac-f74316cd7f64\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" podUID="16c5035b-314e-415a-86ac-f74316cd7f64" Jan 30 13:49:06.298266 containerd[1463]: time="2025-01-30T13:49:06.298221604Z" level=error msg="StopPodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" failed" error="failed to destroy network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:06.298507 kubelet[2504]: E0130 13:49:06.298466 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:06.298507 kubelet[2504]: E0130 13:49:06.298500 2504 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa"} Jan 30 13:49:06.298681 kubelet[2504]: E0130 13:49:06.298525 2504 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4262e5b-77cf-4ec4-9ed8-8944d349be91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:06.298681 kubelet[2504]: E0130 13:49:06.298544 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4262e5b-77cf-4ec4-9ed8-8944d349be91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6d9h6" podUID="a4262e5b-77cf-4ec4-9ed8-8944d349be91" Jan 30 13:49:07.027640 systemd[1]: Created slice kubepods-besteffort-pod7cf29ca6_8cdc_4301_bd59_3bfcbeaaabcc.slice - libcontainer container kubepods-besteffort-pod7cf29ca6_8cdc_4301_bd59_3bfcbeaaabcc.slice. 
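Every RunPodSandbox and StopPodSandbox failure above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that only exists once the calico/node container is running and has mounted /var/lib/calico/. A minimal Go sketch of the check that error text implies, assuming a standalone helper rather than the actual Calico plugin source:

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the path the log repeatedly reports as missing.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the guard implied by the error text: until calico/node
    // has written its node name under /var/lib/calico/, CNI add/delete calls fail.
    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
        }
        return string(data), nil
    }

    func main() {
        if name, err := readNodename(); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("nodename:", name)
        }
    }

Once calico-node for this host starts (see the StartContainer entries that follow), the file appears and the retried sandbox setups succeed.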
Jan 30 13:49:07.029836 containerd[1463]: time="2025-01-30T13:49:07.029792084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jx5hr,Uid:7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc,Namespace:calico-system,Attempt:0,}" Jan 30 13:49:07.089503 containerd[1463]: time="2025-01-30T13:49:07.089443273Z" level=error msg="Failed to destroy network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:07.089846 containerd[1463]: time="2025-01-30T13:49:07.089814252Z" level=error msg="encountered an error cleaning up failed sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:07.089894 containerd[1463]: time="2025-01-30T13:49:07.089873102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jx5hr,Uid:7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:07.090184 kubelet[2504]: E0130 13:49:07.090117 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:07.090325 kubelet[2504]: E0130 13:49:07.090207 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:49:07.090325 kubelet[2504]: E0130 13:49:07.090229 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jx5hr" Jan 30 13:49:07.090325 kubelet[2504]: E0130 13:49:07.090281 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jx5hr_calico-system(7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jx5hr_calico-system(7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:49:07.091722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac-shm.mount: Deactivated successfully. Jan 30 13:49:07.244631 kubelet[2504]: I0130 13:49:07.244590 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:07.245242 containerd[1463]: time="2025-01-30T13:49:07.245205637Z" level=info msg="StopPodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\"" Jan 30 13:49:07.245730 containerd[1463]: time="2025-01-30T13:49:07.245702623Z" level=info msg="Ensure that sandbox 2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac in task-service has been cleanup successfully" Jan 30 13:49:07.271099 containerd[1463]: time="2025-01-30T13:49:07.271039495Z" level=error msg="StopPodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" failed" error="failed to destroy network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:07.271381 kubelet[2504]: E0130 13:49:07.271335 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:07.271436 kubelet[2504]: E0130 13:49:07.271390 2504 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac"} Jan 30 13:49:07.271436 kubelet[2504]: E0130 13:49:07.271423 2504 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:07.271537 kubelet[2504]: E0130 13:49:07.271446 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-jx5hr" podUID="7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc" Jan 30 13:49:07.794575 kubelet[2504]: I0130 13:49:07.794538 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:07.795188 kubelet[2504]: E0130 13:49:07.795172 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:08.246594 kubelet[2504]: E0130 13:49:08.246559 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:10.442378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387161722.mount: Deactivated successfully. Jan 30 13:49:11.002119 containerd[1463]: time="2025-01-30T13:49:11.002057191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:11.002979 containerd[1463]: time="2025-01-30T13:49:11.002943107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:49:11.004049 containerd[1463]: time="2025-01-30T13:49:11.004007540Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:11.006449 containerd[1463]: time="2025-01-30T13:49:11.006407435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:11.007074 containerd[1463]: time="2025-01-30T13:49:11.007021109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.776031456s" Jan 30 13:49:11.007074 containerd[1463]: time="2025-01-30T13:49:11.007068198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:49:11.015087 containerd[1463]: time="2025-01-30T13:49:11.015038481Z" level=info msg="CreateContainer within sandbox \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:49:11.033793 containerd[1463]: time="2025-01-30T13:49:11.033742609Z" level=info msg="CreateContainer within sandbox \"0079aeb152d6fa10002cee11fbfba5f8391f9505ec2287e6fa41d54f453ecbeb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ac63bcfdb8808339a289ad4c325e2d1df5682329721c96dff6c7dee28450902\"" Jan 30 13:49:11.034479 containerd[1463]: time="2025-01-30T13:49:11.034268819Z" level=info msg="StartContainer for \"9ac63bcfdb8808339a289ad4c325e2d1df5682329721c96dff6c7dee28450902\"" Jan 30 13:49:11.099274 systemd[1]: Started cri-containerd-9ac63bcfdb8808339a289ad4c325e2d1df5682329721c96dff6c7dee28450902.scope - libcontainer container 9ac63bcfdb8808339a289ad4c325e2d1df5682329721c96dff6c7dee28450902. 
Jan 30 13:49:11.203078 containerd[1463]: time="2025-01-30T13:49:11.203011837Z" level=info msg="StartContainer for \"9ac63bcfdb8808339a289ad4c325e2d1df5682329721c96dff6c7dee28450902\" returns successfully" Jan 30 13:49:11.234667 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:49:11.234841 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:49:11.257651 kubelet[2504]: E0130 13:49:11.257531 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:11.278747 kubelet[2504]: I0130 13:49:11.276537 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jv9qh" podStartSLOduration=1.5605915339999998 podStartE2EDuration="17.276512139s" podCreationTimestamp="2025-01-30 13:48:54 +0000 UTC" firstStartedPulling="2025-01-30 13:48:55.291759173 +0000 UTC m=+11.347009082" lastFinishedPulling="2025-01-30 13:49:11.007679778 +0000 UTC m=+27.062929687" observedRunningTime="2025-01-30 13:49:11.275961433 +0000 UTC m=+27.331211343" watchObservedRunningTime="2025-01-30 13:49:11.276512139 +0000 UTC m=+27.331762048" Jan 30 13:49:12.258842 kubelet[2504]: I0130 13:49:12.258805 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:12.259293 kubelet[2504]: E0130 13:49:12.259247 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:12.621180 kernel: bpftool[3838]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:49:12.845126 systemd-networkd[1392]: vxlan.calico: Link UP Jan 30 13:49:12.845158 systemd-networkd[1392]: vxlan.calico: Gained carrier Jan 30 13:49:14.105240 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:50024.service - OpenSSH per-connection server daemon (10.0.0.1:50024). Jan 30 13:49:14.146839 sshd[3915]: Accepted publickey for core from 10.0.0.1 port 50024 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:14.148549 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:14.153316 systemd-logind[1449]: New session 10 of user core. Jan 30 13:49:14.165296 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:49:14.305184 sshd[3915]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:14.309833 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:50024.service: Deactivated successfully. Jan 30 13:49:14.312492 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:49:14.313463 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:49:14.314629 systemd-logind[1449]: Removed session 10. 
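The pod_startup_latency_tracker entry above derives its two durations from the timestamps it also prints: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A small sketch reproducing that arithmetic with the values from the log; this is a standalone example, not kubelet code:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching the timestamps printed by the tracker entry above.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-01-30 13:48:54 +0000 UTC")            // podCreationTimestamp
        firstPull := mustParse("2025-01-30 13:48:55.291759173 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-01-30 13:49:11.007679778 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2025-01-30 13:49:11.276512139 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)          // podStartE2EDuration, ≈ 17.276512139s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration, ≈ 1.560591534s
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }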
Jan 30 13:49:14.635379 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Jan 30 13:49:17.023328 containerd[1463]: time="2025-01-30T13:49:17.023279338Z" level=info msg="StopPodSandbox for \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\"" Jan 30 13:49:17.023744 containerd[1463]: time="2025-01-30T13:49:17.023279438Z" level=info msg="StopPodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\"" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.070 [INFO][3964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.070 [INFO][3964] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" iface="eth0" netns="/var/run/netns/cni-b7ed574f-f13f-76af-2241-af03b6d5e6b3" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3964] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" iface="eth0" netns="/var/run/netns/cni-b7ed574f-f13f-76af-2241-af03b6d5e6b3" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3964] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" iface="eth0" netns="/var/run/netns/cni-b7ed574f-f13f-76af-2241-af03b6d5e6b3" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.127 [INFO][3979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.128 [INFO][3979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.128 [INFO][3979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.134 [WARNING][3979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.134 [INFO][3979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.135 [INFO][3979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:49:17.139644 containerd[1463]: 2025-01-30 13:49:17.137 [INFO][3964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:17.140374 containerd[1463]: time="2025-01-30T13:49:17.139905245Z" level=info msg="TearDown network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" successfully" Jan 30 13:49:17.140374 containerd[1463]: time="2025-01-30T13:49:17.139956412Z" level=info msg="StopPodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" returns successfully" Jan 30 13:49:17.140577 kubelet[2504]: E0130 13:49:17.140552 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:17.141688 containerd[1463]: time="2025-01-30T13:49:17.141667067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6d9h6,Uid:a4262e5b-77cf-4ec4-9ed8-8944d349be91,Namespace:kube-system,Attempt:1,}" Jan 30 13:49:17.143378 systemd[1]: run-netns-cni\x2db7ed574f\x2df13f\x2d76af\x2d2241\x2daf03b6d5e6b3.mount: Deactivated successfully. Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.069 [INFO][3965] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.069 [INFO][3965] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" iface="eth0" netns="/var/run/netns/cni-2b961258-1c14-7950-9e4a-21fc098ba59b" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.070 [INFO][3965] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" iface="eth0" netns="/var/run/netns/cni-2b961258-1c14-7950-9e4a-21fc098ba59b" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3965] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" iface="eth0" netns="/var/run/netns/cni-2b961258-1c14-7950-9e4a-21fc098ba59b" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3965] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.071 [INFO][3965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.127 [INFO][3980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.128 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.135 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.139 [WARNING][3980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.139 [INFO][3980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.140 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:17.148398 containerd[1463]: 2025-01-30 13:49:17.146 [INFO][3965] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:17.148855 containerd[1463]: time="2025-01-30T13:49:17.148566252Z" level=info msg="TearDown network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" successfully" Jan 30 13:49:17.148855 containerd[1463]: time="2025-01-30T13:49:17.148594374Z" level=info msg="StopPodSandbox for \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" returns successfully" Jan 30 13:49:17.150787 systemd[1]: run-netns-cni\x2d2b961258\x2d1c14\x2d7950\x2d9e4a\x2d21fc098ba59b.mount: Deactivated successfully. Jan 30 13:49:17.154641 containerd[1463]: time="2025-01-30T13:49:17.154605822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c68cd766f-p9c29,Uid:16c5035b-314e-415a-86ac-f74316cd7f64,Namespace:calico-system,Attempt:1,}" Jan 30 13:49:17.262968 systemd-networkd[1392]: cali4aed67954a6: Link UP Jan 30 13:49:17.263625 systemd-networkd[1392]: cali4aed67954a6: Gained carrier Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.199 [INFO][3994] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0 coredns-6f6b679f8f- kube-system a4262e5b-77cf-4ec4-9ed8-8944d349be91 802 0 2025-01-30 13:48:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-6d9h6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4aed67954a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.199 [INFO][3994] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.229 [INFO][4020] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" 
HandleID="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.236 [INFO][4020] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" HandleID="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294320), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-6d9h6", "timestamp":"2025-01-30 13:49:17.229051513 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.236 [INFO][4020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.236 [INFO][4020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.236 [INFO][4020] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.238 [INFO][4020] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.241 [INFO][4020] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.244 [INFO][4020] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.246 [INFO][4020] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.247 [INFO][4020] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.247 [INFO][4020] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.248 [INFO][4020] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13 Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.253 [INFO][4020] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.258 [INFO][4020] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" host="localhost" Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.258 [INFO][4020] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" host="localhost" Jan 30 
13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.258 [INFO][4020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:17.276522 containerd[1463]: 2025-01-30 13:49:17.258 [INFO][4020] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" HandleID="k8s-pod-network.e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.277443 containerd[1463]: 2025-01-30 13:49:17.260 [INFO][3994] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a4262e5b-77cf-4ec4-9ed8-8944d349be91", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-6d9h6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4aed67954a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:17.277443 containerd[1463]: 2025-01-30 13:49:17.260 [INFO][3994] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.277443 containerd[1463]: 2025-01-30 13:49:17.260 [INFO][3994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4aed67954a6 ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.277443 containerd[1463]: 2025-01-30 13:49:17.263 [INFO][3994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.277443 containerd[1463]: 2025-01-30 13:49:17.264 [INFO][3994] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a4262e5b-77cf-4ec4-9ed8-8944d349be91", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13", Pod:"coredns-6f6b679f8f-6d9h6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4aed67954a6", MAC:"02:12:69:42:40:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:17.277443 containerd[1463]: 2025-01-30 13:49:17.272 [INFO][3994] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13" Namespace="kube-system" Pod="coredns-6f6b679f8f-6d9h6" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:17.311868 containerd[1463]: time="2025-01-30T13:49:17.311781563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:17.311868 containerd[1463]: time="2025-01-30T13:49:17.311835424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:17.311868 containerd[1463]: time="2025-01-30T13:49:17.311856564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:17.312049 containerd[1463]: time="2025-01-30T13:49:17.311960660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:17.334251 systemd[1]: Started cri-containerd-e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13.scope - libcontainer container e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13. Jan 30 13:49:17.347757 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:49:17.368120 systemd-networkd[1392]: cali81cb2a19bf3: Link UP Jan 30 13:49:17.369594 systemd-networkd[1392]: cali81cb2a19bf3: Gained carrier Jan 30 13:49:17.388959 containerd[1463]: time="2025-01-30T13:49:17.388902723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6d9h6,Uid:a4262e5b-77cf-4ec4-9ed8-8944d349be91,Namespace:kube-system,Attempt:1,} returns sandbox id \"e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13\"" Jan 30 13:49:17.389629 kubelet[2504]: E0130 13:49:17.389596 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:17.391258 containerd[1463]: time="2025-01-30T13:49:17.391225137Z" level=info msg="CreateContainer within sandbox \"e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.205 [INFO][4005] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0 calico-kube-controllers-c68cd766f- calico-system 16c5035b-314e-415a-86ac-f74316cd7f64 801 0 2025-01-30 13:48:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c68cd766f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c68cd766f-p9c29 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81cb2a19bf3 [] []}} ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.205 [INFO][4005] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.230 [INFO][4025] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" HandleID="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.237 [INFO][4025] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" HandleID="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002416f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c68cd766f-p9c29", "timestamp":"2025-01-30 13:49:17.230583683 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.237 [INFO][4025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.258 [INFO][4025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.258 [INFO][4025] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.338 [INFO][4025] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.342 [INFO][4025] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.346 [INFO][4025] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.348 [INFO][4025] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.350 [INFO][4025] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.350 [INFO][4025] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.351 [INFO][4025] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1 Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.355 [INFO][4025] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.360 [INFO][4025] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.360 [INFO][4025] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" host="localhost" Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.360 [INFO][4025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
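The IPAM trace above walks the block 192.168.88.128/26, finds .128 and .129 already in use, and claims 192.168.88.130/26 for the calico-kube-controllers pod. As an annotation only, here is a minimal Go sketch of "pick the next unassigned address inside a /26 block"; it is a linear scan for illustration, not Calico's datastore-backed block allocator.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFreeIP returns the first address in the block that is not already taken.
    // Illustrative linear scan only; the real allocator tracks allocations per block
    // in the datastore, as the "Writing block in order to claim IPs" entries show.
    func nextFreeIP(block netip.Prefix, taken map[netip.Addr]bool) (netip.Addr, bool) {
        for ip := block.Addr(); block.Contains(ip); ip = ip.Next() {
            if !taken[ip] {
                return ip, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        // Addresses already handed out earlier in this log (.128 network base, .129 coredns).
        taken := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.128"): true,
            netip.MustParseAddr("192.168.88.129"): true,
        }
        if ip, ok := nextFreeIP(block, taken); ok {
            fmt.Println("next address:", ip) // 192.168.88.130, matching the claim above
        }
    }
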
Jan 30 13:49:17.407740 containerd[1463]: 2025-01-30 13:49:17.360 [INFO][4025] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" HandleID="k8s-pod-network.194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.409310 containerd[1463]: 2025-01-30 13:49:17.364 [INFO][4005] cni-plugin/k8s.go 386: Populated endpoint ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0", GenerateName:"calico-kube-controllers-c68cd766f-", Namespace:"calico-system", SelfLink:"", UID:"16c5035b-314e-415a-86ac-f74316cd7f64", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c68cd766f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c68cd766f-p9c29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81cb2a19bf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:17.409310 containerd[1463]: 2025-01-30 13:49:17.364 [INFO][4005] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.409310 containerd[1463]: 2025-01-30 13:49:17.365 [INFO][4005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81cb2a19bf3 ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.409310 containerd[1463]: 2025-01-30 13:49:17.370 [INFO][4005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.409310 containerd[1463]: 2025-01-30 13:49:17.371 [INFO][4005] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0", GenerateName:"calico-kube-controllers-c68cd766f-", Namespace:"calico-system", SelfLink:"", UID:"16c5035b-314e-415a-86ac-f74316cd7f64", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c68cd766f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1", Pod:"calico-kube-controllers-c68cd766f-p9c29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81cb2a19bf3", MAC:"fe:eb:94:b5:01:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:17.409310 containerd[1463]: 2025-01-30 13:49:17.404 [INFO][4005] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1" Namespace="calico-system" Pod="calico-kube-controllers-c68cd766f-p9c29" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:17.418758 containerd[1463]: time="2025-01-30T13:49:17.418707593Z" level=info msg="CreateContainer within sandbox \"e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0115c782bcfca1de6aced81d78d84ccd9b529a67b133371b6d4c9b845198b527\"" Jan 30 13:49:17.419668 containerd[1463]: time="2025-01-30T13:49:17.419645977Z" level=info msg="StartContainer for \"0115c782bcfca1de6aced81d78d84ccd9b529a67b133371b6d4c9b845198b527\"" Jan 30 13:49:17.435580 containerd[1463]: time="2025-01-30T13:49:17.435418297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:17.435580 containerd[1463]: time="2025-01-30T13:49:17.435492416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:17.435580 containerd[1463]: time="2025-01-30T13:49:17.435512383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:17.436425 containerd[1463]: time="2025-01-30T13:49:17.436322416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:17.448301 systemd[1]: Started cri-containerd-0115c782bcfca1de6aced81d78d84ccd9b529a67b133371b6d4c9b845198b527.scope - libcontainer container 0115c782bcfca1de6aced81d78d84ccd9b529a67b133371b6d4c9b845198b527. Jan 30 13:49:17.451793 systemd[1]: Started cri-containerd-194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1.scope - libcontainer container 194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1. Jan 30 13:49:17.464536 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:49:17.479905 containerd[1463]: time="2025-01-30T13:49:17.479807645Z" level=info msg="StartContainer for \"0115c782bcfca1de6aced81d78d84ccd9b529a67b133371b6d4c9b845198b527\" returns successfully" Jan 30 13:49:17.493275 containerd[1463]: time="2025-01-30T13:49:17.493242793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c68cd766f-p9c29,Uid:16c5035b-314e-415a-86ac-f74316cd7f64,Namespace:calico-system,Attempt:1,} returns sandbox id \"194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1\"" Jan 30 13:49:17.494792 containerd[1463]: time="2025-01-30T13:49:17.494625801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:49:18.022707 containerd[1463]: time="2025-01-30T13:49:18.022628231Z" level=info msg="StopPodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\"" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.059 [INFO][4202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.059 [INFO][4202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" iface="eth0" netns="/var/run/netns/cni-1996b858-e5d3-a7c2-613d-c01167c73f97" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.059 [INFO][4202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" iface="eth0" netns="/var/run/netns/cni-1996b858-e5d3-a7c2-613d-c01167c73f97" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.060 [INFO][4202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" iface="eth0" netns="/var/run/netns/cni-1996b858-e5d3-a7c2-613d-c01167c73f97" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.060 [INFO][4202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.060 [INFO][4202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.083 [INFO][4209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.083 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.083 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.087 [WARNING][4209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.087 [INFO][4209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.088 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:18.092868 containerd[1463]: 2025-01-30 13:49:18.090 [INFO][4202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:18.093643 containerd[1463]: time="2025-01-30T13:49:18.093034202Z" level=info msg="TearDown network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" successfully" Jan 30 13:49:18.093643 containerd[1463]: time="2025-01-30T13:49:18.093063778Z" level=info msg="StopPodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" returns successfully" Jan 30 13:49:18.093716 containerd[1463]: time="2025-01-30T13:49:18.093663946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-8m9s5,Uid:9d410147-17e9-453a-a04a-5d59f2f808df,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:49:18.148827 systemd[1]: run-netns-cni\x2d1996b858\x2de5d3\x2da7c2\x2d613d\x2dc01167c73f97.mount: Deactivated successfully. 
Jan 30 13:49:18.191548 systemd-networkd[1392]: calidd41e7c840b: Link UP Jan 30 13:49:18.196236 systemd-networkd[1392]: calidd41e7c840b: Gained carrier Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.130 [INFO][4216] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0 calico-apiserver-84cd9657bc- calico-apiserver 9d410147-17e9-453a-a04a-5d59f2f808df 821 0 2025-01-30 13:48:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84cd9657bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84cd9657bc-8m9s5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd41e7c840b [] []}} ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.130 [INFO][4216] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.161 [INFO][4231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" HandleID="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.168 [INFO][4231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" HandleID="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050ce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84cd9657bc-8m9s5", "timestamp":"2025-01-30 13:49:18.161386075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.168 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.168 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
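Every endpoint dumped in this log carries two Calico profiles, one derived from the namespace and one from the service account (kns.kube-system and ksa.kube-system.coredns for the coredns pod, kns.calico-apiserver and ksa.calico-apiserver.calico-apiserver here). A short sketch that reproduces the naming pattern visible in these dumps; the prefixes come straight from the log, the helper itself is illustrative.

    package main

    import "fmt"

    // profilesFor builds profile names following the pattern seen in this log:
    // kns.<namespace> and ksa.<namespace>.<serviceaccount>.
    func profilesFor(namespace, serviceAccount string) []string {
        return []string{
            "kns." + namespace,
            "ksa." + namespace + "." + serviceAccount,
        }
    }

    func main() {
        fmt.Println(profilesFor("kube-system", "coredns"))               // [kns.kube-system ksa.kube-system.coredns]
        fmt.Println(profilesFor("calico-apiserver", "calico-apiserver")) // [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver]
    }
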
Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.168 [INFO][4231] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.169 [INFO][4231] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.172 [INFO][4231] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.175 [INFO][4231] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.176 [INFO][4231] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.178 [INFO][4231] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.178 [INFO][4231] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.179 [INFO][4231] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7 Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.182 [INFO][4231] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.186 [INFO][4231] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.186 [INFO][4231] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" host="localhost" Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.186 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
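Each allocation in this log is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", so assignments for pods landing on the same node never race. A minimal sketch of that bracketing with a process-local mutex; the real lock is coordinated through the datastore, so this shows only the shape of the pattern.

    package main

    import (
        "fmt"
        "sync"
    )

    var hostIPAMLock sync.Mutex // stands in for the host-wide lock named in the log

    // withHostLock runs fn while holding the lock, mirroring the
    // acquire -> assign -> release bracketing around each IPAM request above.
    func withHostLock(fn func()) {
        fmt.Println("About to acquire host-wide IPAM lock.")
        hostIPAMLock.Lock()
        fmt.Println("Acquired host-wide IPAM lock.")
        defer func() {
            hostIPAMLock.Unlock()
            fmt.Println("Released host-wide IPAM lock.")
        }()
        fn()
    }

    func main() {
        withHostLock(func() { fmt.Println("assign 1 IPv4 address for the pending pod") })
    }
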
Jan 30 13:49:18.203419 containerd[1463]: 2025-01-30 13:49:18.186 [INFO][4231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" HandleID="k8s-pod-network.a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.204404 containerd[1463]: 2025-01-30 13:49:18.189 [INFO][4216] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d410147-17e9-453a-a04a-5d59f2f808df", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84cd9657bc-8m9s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd41e7c840b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:18.204404 containerd[1463]: 2025-01-30 13:49:18.189 [INFO][4216] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.204404 containerd[1463]: 2025-01-30 13:49:18.189 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd41e7c840b ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.204404 containerd[1463]: 2025-01-30 13:49:18.192 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.204404 containerd[1463]: 2025-01-30 13:49:18.192 [INFO][4216] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d410147-17e9-453a-a04a-5d59f2f808df", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7", Pod:"calico-apiserver-84cd9657bc-8m9s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd41e7c840b", MAC:"de:db:79:8a:6e:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:18.204404 containerd[1463]: 2025-01-30 13:49:18.200 [INFO][4216] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-8m9s5" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:18.222809 containerd[1463]: time="2025-01-30T13:49:18.222708297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:18.222809 containerd[1463]: time="2025-01-30T13:49:18.222787356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:18.222809 containerd[1463]: time="2025-01-30T13:49:18.222801071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:18.222985 containerd[1463]: time="2025-01-30T13:49:18.222886883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:18.246294 systemd[1]: Started cri-containerd-a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7.scope - libcontainer container a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7. 
Jan 30 13:49:18.258866 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:49:18.281393 kubelet[2504]: E0130 13:49:18.280600 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:18.287330 containerd[1463]: time="2025-01-30T13:49:18.287291938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-8m9s5,Uid:9d410147-17e9-453a-a04a-5d59f2f808df,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7\"" Jan 30 13:49:18.290880 kubelet[2504]: I0130 13:49:18.290074 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6d9h6" podStartSLOduration=29.290056883 podStartE2EDuration="29.290056883s" podCreationTimestamp="2025-01-30 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:18.290033229 +0000 UTC m=+34.345283158" watchObservedRunningTime="2025-01-30 13:49:18.290056883 +0000 UTC m=+34.345306802" Jan 30 13:49:18.603272 systemd-networkd[1392]: cali4aed67954a6: Gained IPv6LL Jan 30 13:49:19.136788 containerd[1463]: time="2025-01-30T13:49:19.136730902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:19.137517 containerd[1463]: time="2025-01-30T13:49:19.137475912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:49:19.138736 containerd[1463]: time="2025-01-30T13:49:19.138699440Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:19.140688 containerd[1463]: time="2025-01-30T13:49:19.140650656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:19.141218 containerd[1463]: time="2025-01-30T13:49:19.141177646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.646526427s" Jan 30 13:49:19.141218 containerd[1463]: time="2025-01-30T13:49:19.141205979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:49:19.142091 containerd[1463]: time="2025-01-30T13:49:19.142065434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:49:19.148757 containerd[1463]: time="2025-01-30T13:49:19.148581075Z" level=info msg="CreateContainer within sandbox \"194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:49:19.162698 containerd[1463]: 
time="2025-01-30T13:49:19.162658693Z" level=info msg="CreateContainer within sandbox \"194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2e8b5e54949b25b513816a803cefcf649726a90b9561707c0c93e9729d8d7889\"" Jan 30 13:49:19.163093 containerd[1463]: time="2025-01-30T13:49:19.163072080Z" level=info msg="StartContainer for \"2e8b5e54949b25b513816a803cefcf649726a90b9561707c0c93e9729d8d7889\"" Jan 30 13:49:19.195283 systemd[1]: Started cri-containerd-2e8b5e54949b25b513816a803cefcf649726a90b9561707c0c93e9729d8d7889.scope - libcontainer container 2e8b5e54949b25b513816a803cefcf649726a90b9561707c0c93e9729d8d7889. Jan 30 13:49:19.232394 containerd[1463]: time="2025-01-30T13:49:19.232345034Z" level=info msg="StartContainer for \"2e8b5e54949b25b513816a803cefcf649726a90b9561707c0c93e9729d8d7889\" returns successfully" Jan 30 13:49:19.243347 systemd-networkd[1392]: cali81cb2a19bf3: Gained IPv6LL Jan 30 13:49:19.288321 kubelet[2504]: E0130 13:49:19.288117 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:19.300719 kubelet[2504]: I0130 13:49:19.300642 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c68cd766f-p9c29" podStartSLOduration=22.653175574 podStartE2EDuration="24.300623131s" podCreationTimestamp="2025-01-30 13:48:55 +0000 UTC" firstStartedPulling="2025-01-30 13:49:17.494415436 +0000 UTC m=+33.549665335" lastFinishedPulling="2025-01-30 13:49:19.141862983 +0000 UTC m=+35.197112892" observedRunningTime="2025-01-30 13:49:19.300294473 +0000 UTC m=+35.355544382" watchObservedRunningTime="2025-01-30 13:49:19.300623131 +0000 UTC m=+35.355873040" Jan 30 13:49:19.316756 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Jan 30 13:49:19.356288 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:19.358077 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:19.361963 systemd-logind[1449]: New session 11 of user core. Jan 30 13:49:19.370266 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:49:19.493293 sshd[4345]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:19.497823 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:50030.service: Deactivated successfully. Jan 30 13:49:19.500384 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:49:19.501103 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:49:19.502026 systemd-logind[1449]: Removed session 11. Jan 30 13:49:19.563358 systemd-networkd[1392]: calidd41e7c840b: Gained IPv6LL Jan 30 13:49:20.023170 containerd[1463]: time="2025-01-30T13:49:20.023076530Z" level=info msg="StopPodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\"" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.063 [INFO][4375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.064 [INFO][4375] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" iface="eth0" netns="/var/run/netns/cni-7e076bf5-addc-9a0e-f199-780396bc9093" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.064 [INFO][4375] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" iface="eth0" netns="/var/run/netns/cni-7e076bf5-addc-9a0e-f199-780396bc9093" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.064 [INFO][4375] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" iface="eth0" netns="/var/run/netns/cni-7e076bf5-addc-9a0e-f199-780396bc9093" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.064 [INFO][4375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.064 [INFO][4375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.087 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.087 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.087 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.092 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.092 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.093 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:20.099194 containerd[1463]: 2025-01-30 13:49:20.096 [INFO][4375] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:20.099637 containerd[1463]: time="2025-01-30T13:49:20.099333579Z" level=info msg="TearDown network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" successfully" Jan 30 13:49:20.099637 containerd[1463]: time="2025-01-30T13:49:20.099359537Z" level=info msg="StopPodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" returns successfully" Jan 30 13:49:20.100005 containerd[1463]: time="2025-01-30T13:49:20.099960757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jx5hr,Uid:7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc,Namespace:calico-system,Attempt:1,}" Jan 30 13:49:20.151522 systemd[1]: run-netns-cni\x2d7e076bf5\x2daddc\x2d9a0e\x2df199\x2d780396bc9093.mount: Deactivated successfully. Jan 30 13:49:20.213510 systemd-networkd[1392]: calidedebaf3bb8: Link UP Jan 30 13:49:20.213734 systemd-networkd[1392]: calidedebaf3bb8: Gained carrier Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.143 [INFO][4390] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jx5hr-eth0 csi-node-driver- calico-system 7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc 858 0 2025-01-30 13:48:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jx5hr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidedebaf3bb8 [] []}} ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.143 [INFO][4390] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.175 [INFO][4403] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" HandleID="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.183 [INFO][4403] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" HandleID="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030aed0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jx5hr", "timestamp":"2025-01-30 13:49:20.175126425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.183 
[INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.183 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.183 [INFO][4403] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.185 [INFO][4403] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.189 [INFO][4403] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.193 [INFO][4403] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.195 [INFO][4403] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.196 [INFO][4403] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.197 [INFO][4403] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.199 [INFO][4403] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.202 [INFO][4403] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.208 [INFO][4403] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.208 [INFO][4403] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" host="localhost" Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.208 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
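The kubelet pod_startup_latency_tracker entry earlier in this log reports podStartSLOduration=29.290056883s for coredns-6f6b679f8f-6d9h6, which is simply the gap between the pod's creation timestamp (13:48:49) and the observed running time (13:49:18.290056883). A tiny sketch reproducing that arithmetic from the timestamps as logged; parse errors are ignored for brevity.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kubelet entry for coredns-6f6b679f8f-6d9h6 above.
        created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-01-30 13:48:49 +0000 UTC")
        observed, _ := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", "2025-01-30 13:49:18.290056883 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 29.290056883s, the reported podStartSLOduration
    }
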
Jan 30 13:49:20.225573 containerd[1463]: 2025-01-30 13:49:20.208 [INFO][4403] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" HandleID="k8s-pod-network.cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.226354 containerd[1463]: 2025-01-30 13:49:20.211 [INFO][4390] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jx5hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jx5hr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidedebaf3bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:20.226354 containerd[1463]: 2025-01-30 13:49:20.211 [INFO][4390] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.226354 containerd[1463]: 2025-01-30 13:49:20.211 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidedebaf3bb8 ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.226354 containerd[1463]: 2025-01-30 13:49:20.213 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.226354 containerd[1463]: 2025-01-30 13:49:20.213 [INFO][4390] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jx5hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e", Pod:"csi-node-driver-jx5hr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidedebaf3bb8", MAC:"0e:63:34:16:23:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:20.226354 containerd[1463]: 2025-01-30 13:49:20.222 [INFO][4390] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e" Namespace="calico-system" Pod="csi-node-driver-jx5hr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:20.247409 containerd[1463]: time="2025-01-30T13:49:20.246582245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:20.247409 containerd[1463]: time="2025-01-30T13:49:20.247389211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:20.247409 containerd[1463]: time="2025-01-30T13:49:20.247407225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:20.247640 containerd[1463]: time="2025-01-30T13:49:20.247502063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:20.275333 systemd[1]: Started cri-containerd-cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e.scope - libcontainer container cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e. 
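The host-side interface names in this log (cali4aed67954a6, cali81cb2a19bf3, calidd41e7c840b, calidedebaf3bb8) are all exactly 15 characters, "cali" plus 11 hex characters, the most that fits under the kernel's 15-character interface-name limit. A hedged sketch of deriving such a name from an endpoint identifier; the SHA-1 truncation and the identifier string are illustrative assumptions, not necessarily Calico's exact scheme.

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethNameFor derives a deterministic 15-character host-side interface name
    // from a workload endpoint identifier, which is why the names above are "cali" + 11 chars.
    func vethNameFor(endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        name := vethNameFor("kube-system/coredns-6f6b679f8f-6d9h6/eth0") // hypothetical identifier
        fmt.Println(name, len(name))                                     // always 4+11 = 15 characters
    }
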
Jan 30 13:49:20.288759 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:49:20.290340 kubelet[2504]: I0130 13:49:20.290294 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:20.290996 kubelet[2504]: E0130 13:49:20.290694 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:20.301652 containerd[1463]: time="2025-01-30T13:49:20.301608952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jx5hr,Uid:7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e\"" Jan 30 13:49:21.023229 containerd[1463]: time="2025-01-30T13:49:21.022917092Z" level=info msg="StopPodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\"" Jan 30 13:49:21.023229 containerd[1463]: time="2025-01-30T13:49:21.022917252Z" level=info msg="StopPodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\"" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.085 [INFO][4500] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.085 [INFO][4500] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" iface="eth0" netns="/var/run/netns/cni-f53e95c4-1185-6f93-e1ba-83994d84d861" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.085 [INFO][4500] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" iface="eth0" netns="/var/run/netns/cni-f53e95c4-1185-6f93-e1ba-83994d84d861" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.086 [INFO][4500] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" iface="eth0" netns="/var/run/netns/cni-f53e95c4-1185-6f93-e1ba-83994d84d861" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.086 [INFO][4500] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.086 [INFO][4500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.109 [INFO][4518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.109 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.109 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.115 [WARNING][4518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.115 [INFO][4518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.116 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:21.122492 containerd[1463]: 2025-01-30 13:49:21.119 [INFO][4500] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:21.123576 containerd[1463]: time="2025-01-30T13:49:21.122931216Z" level=info msg="TearDown network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" successfully" Jan 30 13:49:21.123576 containerd[1463]: time="2025-01-30T13:49:21.122956824Z" level=info msg="StopPodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" returns successfully" Jan 30 13:49:21.126185 kubelet[2504]: E0130 13:49:21.124113 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:21.126308 containerd[1463]: time="2025-01-30T13:49:21.124854829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhlnc,Uid:9a799fbf-c296-42f5-b998-fc9dfc513713,Namespace:kube-system,Attempt:1,}" Jan 30 13:49:21.127056 systemd[1]: run-netns-cni\x2df53e95c4\x2d1185\x2d6f93\x2de1ba\x2d83994d84d861.mount: Deactivated successfully. Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.083 [INFO][4504] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.083 [INFO][4504] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" iface="eth0" netns="/var/run/netns/cni-f5f1bacf-f93e-be2f-4ed0-23b80413acd1" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.085 [INFO][4504] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" iface="eth0" netns="/var/run/netns/cni-f5f1bacf-f93e-be2f-4ed0-23b80413acd1" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.086 [INFO][4504] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" iface="eth0" netns="/var/run/netns/cni-f5f1bacf-f93e-be2f-4ed0-23b80413acd1" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.086 [INFO][4504] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.086 [INFO][4504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.112 [INFO][4519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.112 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.116 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.121 [WARNING][4519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.121 [INFO][4519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.125 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:21.131889 containerd[1463]: 2025-01-30 13:49:21.129 [INFO][4504] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:21.132723 containerd[1463]: time="2025-01-30T13:49:21.132670359Z" level=info msg="TearDown network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" successfully" Jan 30 13:49:21.132723 containerd[1463]: time="2025-01-30T13:49:21.132713550Z" level=info msg="StopPodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" returns successfully" Jan 30 13:49:21.133586 containerd[1463]: time="2025-01-30T13:49:21.133546635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-jd7pp,Uid:e22f5f0e-705c-4efc-8f05-4e9e5c95bf68,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:49:21.135233 systemd[1]: run-netns-cni\x2df5f1bacf\x2df93e\x2dbe2f\x2d4ed0\x2d23b80413acd1.mount: Deactivated successfully. 
Jan 30 13:49:21.254528 systemd-networkd[1392]: calid955b39eecb: Link UP Jan 30 13:49:21.255204 systemd-networkd[1392]: calid955b39eecb: Gained carrier Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.178 [INFO][4533] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0 coredns-6f6b679f8f- kube-system 9a799fbf-c296-42f5-b998-fc9dfc513713 872 0 2025-01-30 13:48:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hhlnc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid955b39eecb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.178 [INFO][4533] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.208 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" HandleID="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.219 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" HandleID="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hhlnc", "timestamp":"2025-01-30 13:49:21.208883899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.219 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.219 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.219 [INFO][4561] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.221 [INFO][4561] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.226 [INFO][4561] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.231 [INFO][4561] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.233 [INFO][4561] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.236 [INFO][4561] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.236 [INFO][4561] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.238 [INFO][4561] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.243 [INFO][4561] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.249 [INFO][4561] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.249 [INFO][4561] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" host="localhost" Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.249 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:49:21.269751 containerd[1463]: 2025-01-30 13:49:21.249 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" HandleID="k8s-pod-network.e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.270820 containerd[1463]: 2025-01-30 13:49:21.252 [INFO][4533] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9a799fbf-c296-42f5-b998-fc9dfc513713", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hhlnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid955b39eecb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:21.270820 containerd[1463]: 2025-01-30 13:49:21.252 [INFO][4533] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.270820 containerd[1463]: 2025-01-30 13:49:21.252 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid955b39eecb ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.270820 containerd[1463]: 2025-01-30 13:49:21.254 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.270820 containerd[1463]: 2025-01-30 13:49:21.255 
[INFO][4533] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9a799fbf-c296-42f5-b998-fc9dfc513713", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd", Pod:"coredns-6f6b679f8f-hhlnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid955b39eecb", MAC:"da:30:1a:e8:97:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:21.270820 containerd[1463]: 2025-01-30 13:49:21.263 [INFO][4533] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhlnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:21.301818 containerd[1463]: time="2025-01-30T13:49:21.301748174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:21.301967 containerd[1463]: time="2025-01-30T13:49:21.301808417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:21.301967 containerd[1463]: time="2025-01-30T13:49:21.301820890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.301967 containerd[1463]: time="2025-01-30T13:49:21.301917382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.330352 systemd[1]: Started cri-containerd-e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd.scope - libcontainer container e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd. 
Jan 30 13:49:21.349095 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:49:21.360337 systemd-networkd[1392]: calic652b05ae66: Link UP Jan 30 13:49:21.361555 systemd-networkd[1392]: calic652b05ae66: Gained carrier Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.201 [INFO][4544] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0 calico-apiserver-84cd9657bc- calico-apiserver e22f5f0e-705c-4efc-8f05-4e9e5c95bf68 873 0 2025-01-30 13:48:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84cd9657bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84cd9657bc-jd7pp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic652b05ae66 [] []}} ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.202 [INFO][4544] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.234 [INFO][4570] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" HandleID="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.241 [INFO][4570] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" HandleID="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84cd9657bc-jd7pp", "timestamp":"2025-01-30 13:49:21.234254893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.241 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.249 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.249 [INFO][4570] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.323 [INFO][4570] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.329 [INFO][4570] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.336 [INFO][4570] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.338 [INFO][4570] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.341 [INFO][4570] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.342 [INFO][4570] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.343 [INFO][4570] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119 Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.346 [INFO][4570] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.351 [INFO][4570] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.351 [INFO][4570] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" host="localhost" Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.351 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:49:21.379622 containerd[1463]: 2025-01-30 13:49:21.351 [INFO][4570] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" HandleID="k8s-pod-network.7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.380174 containerd[1463]: 2025-01-30 13:49:21.355 [INFO][4544] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84cd9657bc-jd7pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic652b05ae66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:21.380174 containerd[1463]: 2025-01-30 13:49:21.356 [INFO][4544] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.380174 containerd[1463]: 2025-01-30 13:49:21.356 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic652b05ae66 ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.380174 containerd[1463]: 2025-01-30 13:49:21.362 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.380174 containerd[1463]: 2025-01-30 13:49:21.362 [INFO][4544] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119", Pod:"calico-apiserver-84cd9657bc-jd7pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic652b05ae66", MAC:"8a:4a:ea:d1:27:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:21.380174 containerd[1463]: 2025-01-30 13:49:21.374 [INFO][4544] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119" Namespace="calico-apiserver" Pod="calico-apiserver-84cd9657bc-jd7pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:21.385428 containerd[1463]: time="2025-01-30T13:49:21.385299527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhlnc,Uid:9a799fbf-c296-42f5-b998-fc9dfc513713,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd\"" Jan 30 13:49:21.386328 kubelet[2504]: E0130 13:49:21.386223 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:21.390716 containerd[1463]: time="2025-01-30T13:49:21.390467434Z" level=info msg="CreateContainer within sandbox \"e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:49:21.407708 containerd[1463]: time="2025-01-30T13:49:21.406219600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:21.407708 containerd[1463]: time="2025-01-30T13:49:21.406278290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:21.407708 containerd[1463]: time="2025-01-30T13:49:21.406298839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.407708 containerd[1463]: time="2025-01-30T13:49:21.406894668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.410652 containerd[1463]: time="2025-01-30T13:49:21.410619134Z" level=info msg="CreateContainer within sandbox \"e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70d0f50d9b88ab52907aef9450752c98758e9366cf8c371ecd44f1eaa98308c9\"" Jan 30 13:49:21.411655 containerd[1463]: time="2025-01-30T13:49:21.411612219Z" level=info msg="StartContainer for \"70d0f50d9b88ab52907aef9450752c98758e9366cf8c371ecd44f1eaa98308c9\"" Jan 30 13:49:21.431274 systemd[1]: Started cri-containerd-7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119.scope - libcontainer container 7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119. Jan 30 13:49:21.451338 systemd[1]: Started cri-containerd-70d0f50d9b88ab52907aef9450752c98758e9366cf8c371ecd44f1eaa98308c9.scope - libcontainer container 70d0f50d9b88ab52907aef9450752c98758e9366cf8c371ecd44f1eaa98308c9. Jan 30 13:49:21.455657 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:49:21.481307 containerd[1463]: time="2025-01-30T13:49:21.481263865Z" level=info msg="StartContainer for \"70d0f50d9b88ab52907aef9450752c98758e9366cf8c371ecd44f1eaa98308c9\" returns successfully" Jan 30 13:49:21.498502 containerd[1463]: time="2025-01-30T13:49:21.498467778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cd9657bc-jd7pp,Uid:e22f5f0e-705c-4efc-8f05-4e9e5c95bf68,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119\"" Jan 30 13:49:21.688924 containerd[1463]: time="2025-01-30T13:49:21.688784876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:21.690063 containerd[1463]: time="2025-01-30T13:49:21.690021239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:49:21.691529 containerd[1463]: time="2025-01-30T13:49:21.691486922Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:21.694208 containerd[1463]: time="2025-01-30T13:49:21.694169500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:21.694853 containerd[1463]: time="2025-01-30T13:49:21.694811406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.552717709s" Jan 30 13:49:21.694885 containerd[1463]: time="2025-01-30T13:49:21.694851010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:49:21.696023 containerd[1463]: time="2025-01-30T13:49:21.695954383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:49:21.697403 containerd[1463]: time="2025-01-30T13:49:21.697286886Z" level=info msg="CreateContainer within sandbox \"a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:49:21.717816 containerd[1463]: time="2025-01-30T13:49:21.717721938Z" level=info msg="CreateContainer within sandbox \"a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e383607fe94e86d0ec157d50e91dd947fb7c21b3e0e3c3f73c4fe37ad90e0c97\"" Jan 30 13:49:21.718506 containerd[1463]: time="2025-01-30T13:49:21.718464203Z" level=info msg="StartContainer for \"e383607fe94e86d0ec157d50e91dd947fb7c21b3e0e3c3f73c4fe37ad90e0c97\"" Jan 30 13:49:21.752386 systemd[1]: Started cri-containerd-e383607fe94e86d0ec157d50e91dd947fb7c21b3e0e3c3f73c4fe37ad90e0c97.scope - libcontainer container e383607fe94e86d0ec157d50e91dd947fb7c21b3e0e3c3f73c4fe37ad90e0c97. Jan 30 13:49:22.003529 containerd[1463]: time="2025-01-30T13:49:22.003370505Z" level=info msg="StartContainer for \"e383607fe94e86d0ec157d50e91dd947fb7c21b3e0e3c3f73c4fe37ad90e0c97\" returns successfully" Jan 30 13:49:22.123347 systemd-networkd[1392]: calidedebaf3bb8: Gained IPv6LL Jan 30 13:49:22.301753 kubelet[2504]: E0130 13:49:22.301713 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:22.466746 kubelet[2504]: I0130 13:49:22.465042 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84cd9657bc-8m9s5" podStartSLOduration=25.057879472 podStartE2EDuration="28.465022391s" podCreationTimestamp="2025-01-30 13:48:54 +0000 UTC" firstStartedPulling="2025-01-30 13:49:18.288587904 +0000 UTC m=+34.343837813" lastFinishedPulling="2025-01-30 13:49:21.695730823 +0000 UTC m=+37.750980732" observedRunningTime="2025-01-30 13:49:22.409304418 +0000 UTC m=+38.464554327" watchObservedRunningTime="2025-01-30 13:49:22.465022391 +0000 UTC m=+38.520272300" Jan 30 13:49:22.466746 kubelet[2504]: I0130 13:49:22.466153 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hhlnc" podStartSLOduration=33.466094936 podStartE2EDuration="33.466094936s" podCreationTimestamp="2025-01-30 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:22.46501675 +0000 UTC m=+38.520266669" watchObservedRunningTime="2025-01-30 13:49:22.466094936 +0000 UTC m=+38.521344845" Jan 30 13:49:23.083397 systemd-networkd[1392]: calic652b05ae66: Gained IPv6LL Jan 30 13:49:23.276649 systemd-networkd[1392]: calid955b39eecb: Gained IPv6LL Jan 30 13:49:23.303286 kubelet[2504]: I0130 13:49:23.303253 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:23.303863 kubelet[2504]: E0130 13:49:23.303821 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:24.236571 containerd[1463]: time="2025-01-30T13:49:24.236518119Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.277612 containerd[1463]: time="2025-01-30T13:49:24.277556838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:49:24.294246 containerd[1463]: time="2025-01-30T13:49:24.294193556Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.305293 kubelet[2504]: E0130 13:49:24.305254 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:24.352939 containerd[1463]: time="2025-01-30T13:49:24.352875614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.353980 containerd[1463]: time="2025-01-30T13:49:24.353918653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.657923032s" Jan 30 13:49:24.353980 containerd[1463]: time="2025-01-30T13:49:24.353961824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:49:24.357188 containerd[1463]: time="2025-01-30T13:49:24.357155480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:49:24.358383 containerd[1463]: time="2025-01-30T13:49:24.358351316Z" level=info msg="CreateContainer within sandbox \"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:49:24.465277 containerd[1463]: time="2025-01-30T13:49:24.465228218Z" level=info msg="CreateContainer within sandbox \"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8637c94e6ae2ddc44a1cc3cec348142803fded6890ca16ac0460fd2376608b4e\"" Jan 30 13:49:24.465919 containerd[1463]: time="2025-01-30T13:49:24.465761229Z" level=info msg="StartContainer for \"8637c94e6ae2ddc44a1cc3cec348142803fded6890ca16ac0460fd2376608b4e\"" Jan 30 13:49:24.499268 systemd[1]: Started cri-containerd-8637c94e6ae2ddc44a1cc3cec348142803fded6890ca16ac0460fd2376608b4e.scope - libcontainer container 8637c94e6ae2ddc44a1cc3cec348142803fded6890ca16ac0460fd2376608b4e. Jan 30 13:49:24.504471 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:56456.service - OpenSSH per-connection server daemon (10.0.0.1:56456). 
Jan 30 13:49:24.570250 containerd[1463]: time="2025-01-30T13:49:24.570178153Z" level=info msg="StartContainer for \"8637c94e6ae2ddc44a1cc3cec348142803fded6890ca16ac0460fd2376608b4e\" returns successfully" Jan 30 13:49:24.604545 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:24.606464 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:24.610882 systemd-logind[1449]: New session 12 of user core. Jan 30 13:49:24.619303 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:49:24.735902 kubelet[2504]: I0130 13:49:24.735819 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:24.736391 kubelet[2504]: E0130 13:49:24.736348 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:24.772855 sshd[4817]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:24.777447 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:56456.service: Deactivated successfully. Jan 30 13:49:24.780977 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:49:24.781766 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:49:24.787460 systemd-logind[1449]: Removed session 12. Jan 30 13:49:24.953443 containerd[1463]: time="2025-01-30T13:49:24.953366074Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.954278 containerd[1463]: time="2025-01-30T13:49:24.954214718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:49:24.956612 containerd[1463]: time="2025-01-30T13:49:24.956569410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 599.368295ms" Jan 30 13:49:24.956612 containerd[1463]: time="2025-01-30T13:49:24.956606840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:49:24.957629 containerd[1463]: time="2025-01-30T13:49:24.957599875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:49:24.958817 containerd[1463]: time="2025-01-30T13:49:24.958663061Z" level=info msg="CreateContainer within sandbox \"7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:49:24.976916 containerd[1463]: time="2025-01-30T13:49:24.976872082Z" level=info msg="CreateContainer within sandbox \"7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d496deaf967bc4ef5ee3bb2383462c212894f11171fc752142390ae9f1a21db\"" Jan 30 13:49:24.977526 containerd[1463]: time="2025-01-30T13:49:24.977483470Z" level=info msg="StartContainer for \"6d496deaf967bc4ef5ee3bb2383462c212894f11171fc752142390ae9f1a21db\"" Jan 30 13:49:25.005277 systemd[1]: 
Started cri-containerd-6d496deaf967bc4ef5ee3bb2383462c212894f11171fc752142390ae9f1a21db.scope - libcontainer container 6d496deaf967bc4ef5ee3bb2383462c212894f11171fc752142390ae9f1a21db. Jan 30 13:49:25.049846 containerd[1463]: time="2025-01-30T13:49:25.049651081Z" level=info msg="StartContainer for \"6d496deaf967bc4ef5ee3bb2383462c212894f11171fc752142390ae9f1a21db\" returns successfully" Jan 30 13:49:25.310119 kubelet[2504]: E0130 13:49:25.310085 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:49:25.357536 kubelet[2504]: I0130 13:49:25.357467 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84cd9657bc-jd7pp" podStartSLOduration=27.900008513 podStartE2EDuration="31.357449583s" podCreationTimestamp="2025-01-30 13:48:54 +0000 UTC" firstStartedPulling="2025-01-30 13:49:21.499966092 +0000 UTC m=+37.555216001" lastFinishedPulling="2025-01-30 13:49:24.957407162 +0000 UTC m=+41.012657071" observedRunningTime="2025-01-30 13:49:25.357067776 +0000 UTC m=+41.412317685" watchObservedRunningTime="2025-01-30 13:49:25.357449583 +0000 UTC m=+41.412699492" Jan 30 13:49:26.312730 kubelet[2504]: I0130 13:49:26.312646 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:26.398680 containerd[1463]: time="2025-01-30T13:49:26.398629538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:26.399638 containerd[1463]: time="2025-01-30T13:49:26.399553392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:49:26.400849 containerd[1463]: time="2025-01-30T13:49:26.400804732Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:26.402995 containerd[1463]: time="2025-01-30T13:49:26.402957504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:26.403859 containerd[1463]: time="2025-01-30T13:49:26.403826896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.446194862s" Jan 30 13:49:26.403920 containerd[1463]: time="2025-01-30T13:49:26.403865619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:49:26.406864 containerd[1463]: time="2025-01-30T13:49:26.406814966Z" level=info msg="CreateContainer within sandbox \"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:49:26.421981 containerd[1463]: time="2025-01-30T13:49:26.421928600Z" level=info msg="CreateContainer within sandbox 
\"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6174e0efa2e3ef0841f0d66af25afae4d403d6d9c9fd507930f4669c913b1caf\"" Jan 30 13:49:26.422494 containerd[1463]: time="2025-01-30T13:49:26.422455018Z" level=info msg="StartContainer for \"6174e0efa2e3ef0841f0d66af25afae4d403d6d9c9fd507930f4669c913b1caf\"" Jan 30 13:49:26.460289 systemd[1]: Started cri-containerd-6174e0efa2e3ef0841f0d66af25afae4d403d6d9c9fd507930f4669c913b1caf.scope - libcontainer container 6174e0efa2e3ef0841f0d66af25afae4d403d6d9c9fd507930f4669c913b1caf. Jan 30 13:49:26.525202 containerd[1463]: time="2025-01-30T13:49:26.525160444Z" level=info msg="StartContainer for \"6174e0efa2e3ef0841f0d66af25afae4d403d6d9c9fd507930f4669c913b1caf\" returns successfully" Jan 30 13:49:27.077079 kubelet[2504]: I0130 13:49:27.077034 2504 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:49:27.077079 kubelet[2504]: I0130 13:49:27.077069 2504 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:49:27.328662 kubelet[2504]: I0130 13:49:27.328476 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jx5hr" podStartSLOduration=27.226593978 podStartE2EDuration="33.328457656s" podCreationTimestamp="2025-01-30 13:48:54 +0000 UTC" firstStartedPulling="2025-01-30 13:49:20.302816761 +0000 UTC m=+36.358066670" lastFinishedPulling="2025-01-30 13:49:26.404680439 +0000 UTC m=+42.459930348" observedRunningTime="2025-01-30 13:49:27.327631174 +0000 UTC m=+43.382881113" watchObservedRunningTime="2025-01-30 13:49:27.328457656 +0000 UTC m=+43.383707565" Jan 30 13:49:29.784718 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:56458.service - OpenSSH per-connection server daemon (10.0.0.1:56458). Jan 30 13:49:29.852755 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 56458 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:29.854346 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:29.859013 systemd-logind[1449]: New session 13 of user core. Jan 30 13:49:29.869307 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:49:30.019815 sshd[4973]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:30.029943 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:56458.service: Deactivated successfully. Jan 30 13:49:30.032697 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:49:30.034449 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:49:30.040528 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:56462.service - OpenSSH per-connection server daemon (10.0.0.1:56462). Jan 30 13:49:30.041578 systemd-logind[1449]: Removed session 13. Jan 30 13:49:30.074001 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 56462 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:30.075570 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:30.079613 systemd-logind[1449]: New session 14 of user core. Jan 30 13:49:30.088261 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 30 13:49:30.265756 sshd[4989]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:30.280734 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:56462.service: Deactivated successfully. Jan 30 13:49:30.282613 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:49:30.284371 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:49:30.291575 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:56466.service - OpenSSH per-connection server daemon (10.0.0.1:56466). Jan 30 13:49:30.292995 systemd-logind[1449]: Removed session 14. Jan 30 13:49:30.325523 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 56466 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:30.327223 sshd[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:30.331437 systemd-logind[1449]: New session 15 of user core. Jan 30 13:49:30.341345 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:49:30.627598 sshd[5002]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:30.632418 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:56466.service: Deactivated successfully. Jan 30 13:49:30.634309 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:49:30.634950 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:49:30.635755 systemd-logind[1449]: Removed session 15. Jan 30 13:49:31.632050 kubelet[2504]: I0130 13:49:31.631952 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:34.647954 kubelet[2504]: I0130 13:49:34.647870 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:35.644415 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:50230.service - OpenSSH per-connection server daemon (10.0.0.1:50230). Jan 30 13:49:35.681944 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 50230 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:35.683490 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:35.687883 systemd-logind[1449]: New session 16 of user core. Jan 30 13:49:35.696264 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:49:35.825842 sshd[5091]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:35.832258 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:50230.service: Deactivated successfully. Jan 30 13:49:35.835419 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:49:35.836241 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:49:35.837124 systemd-logind[1449]: Removed session 16. Jan 30 13:49:40.837261 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:50240.service - OpenSSH per-connection server daemon (10.0.0.1:50240). Jan 30 13:49:40.876374 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 50240 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:40.878114 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:40.882420 systemd-logind[1449]: New session 17 of user core. Jan 30 13:49:40.889307 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:49:41.004176 sshd[5105]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:41.009250 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:50240.service: Deactivated successfully. 
Jan 30 13:49:41.011471 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:49:41.012068 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:49:41.012995 systemd-logind[1449]: Removed session 17. Jan 30 13:49:44.014658 containerd[1463]: time="2025-01-30T13:49:44.014615789Z" level=info msg="StopPodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\"" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.047 [WARNING][5135] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jx5hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e", Pod:"csi-node-driver-jx5hr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidedebaf3bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.047 [INFO][5135] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.047 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" iface="eth0" netns="" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.047 [INFO][5135] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.047 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.068 [INFO][5145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.068 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.068 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.073 [WARNING][5145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.073 [INFO][5145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.074 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.079355 containerd[1463]: 2025-01-30 13:49:44.076 [INFO][5135] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.079867 containerd[1463]: time="2025-01-30T13:49:44.079400839Z" level=info msg="TearDown network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" successfully" Jan 30 13:49:44.079867 containerd[1463]: time="2025-01-30T13:49:44.079435023Z" level=info msg="StopPodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" returns successfully" Jan 30 13:49:44.079957 containerd[1463]: time="2025-01-30T13:49:44.079931105Z" level=info msg="RemovePodSandbox for \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\"" Jan 30 13:49:44.082006 containerd[1463]: time="2025-01-30T13:49:44.081980077Z" level=info msg="Forcibly stopping sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\"" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.115 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jx5hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cf29ca6-8cdc-4301-bd59-3bfcbeaaabcc", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd9ad94c2c8dd0de23f75991e9918f281cfacb301313c2e13a30156e30780b6e", Pod:"csi-node-driver-jx5hr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidedebaf3bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.116 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.116 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" iface="eth0" netns="" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.116 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.116 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.138 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.138 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.138 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.142 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.142 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" HandleID="k8s-pod-network.2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Workload="localhost-k8s-csi--node--driver--jx5hr-eth0" Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.143 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.148737 containerd[1463]: 2025-01-30 13:49:44.146 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac" Jan 30 13:49:44.149166 containerd[1463]: time="2025-01-30T13:49:44.148771120Z" level=info msg="TearDown network for sandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" successfully" Jan 30 13:49:44.157657 containerd[1463]: time="2025-01-30T13:49:44.157619067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:44.157724 containerd[1463]: time="2025-01-30T13:49:44.157683027Z" level=info msg="RemovePodSandbox \"2a5d86839c9410069f5e522fddd39923683d5344c01afc72a390005a9ab6deac\" returns successfully" Jan 30 13:49:44.158362 containerd[1463]: time="2025-01-30T13:49:44.158316735Z" level=info msg="StopPodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\"" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.192 [WARNING][5197] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119", Pod:"calico-apiserver-84cd9657bc-jd7pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic652b05ae66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.192 [INFO][5197] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.192 [INFO][5197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" iface="eth0" netns="" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.192 [INFO][5197] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.193 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.213 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.213 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.213 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.218 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.218 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.219 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.224552 containerd[1463]: 2025-01-30 13:49:44.221 [INFO][5197] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.224977 containerd[1463]: time="2025-01-30T13:49:44.224588043Z" level=info msg="TearDown network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" successfully" Jan 30 13:49:44.224977 containerd[1463]: time="2025-01-30T13:49:44.224612709Z" level=info msg="StopPodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" returns successfully" Jan 30 13:49:44.225150 containerd[1463]: time="2025-01-30T13:49:44.225098641Z" level=info msg="RemovePodSandbox for \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\"" Jan 30 13:49:44.225186 containerd[1463]: time="2025-01-30T13:49:44.225144357Z" level=info msg="Forcibly stopping sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\"" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.255 [WARNING][5228] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e22f5f0e-705c-4efc-8f05-4e9e5c95bf68", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eddd07f8fa898d3e5aa58fff7607f08ccff92bcddf00c164beb21651ea1f119", Pod:"calico-apiserver-84cd9657bc-jd7pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic652b05ae66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.255 [INFO][5228] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.255 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" iface="eth0" netns="" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.255 [INFO][5228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.255 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.276 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.276 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.277 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.281 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.281 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" HandleID="k8s-pod-network.4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Workload="localhost-k8s-calico--apiserver--84cd9657bc--jd7pp-eth0" Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.282 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.287598 containerd[1463]: 2025-01-30 13:49:44.285 [INFO][5228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3" Jan 30 13:49:44.287598 containerd[1463]: time="2025-01-30T13:49:44.287559832Z" level=info msg="TearDown network for sandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" successfully" Jan 30 13:49:44.291342 containerd[1463]: time="2025-01-30T13:49:44.291302102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:44.291390 containerd[1463]: time="2025-01-30T13:49:44.291351184Z" level=info msg="RemovePodSandbox \"4ddee2a6b317438b9d77d010cd73acd1571592f8b515da6e5b2203a2794519a3\" returns successfully" Jan 30 13:49:44.291814 containerd[1463]: time="2025-01-30T13:49:44.291788915Z" level=info msg="StopPodSandbox for \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\"" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.320 [WARNING][5259] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0", GenerateName:"calico-kube-controllers-c68cd766f-", Namespace:"calico-system", SelfLink:"", UID:"16c5035b-314e-415a-86ac-f74316cd7f64", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c68cd766f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1", Pod:"calico-kube-controllers-c68cd766f-p9c29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81cb2a19bf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.321 [INFO][5259] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.321 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" iface="eth0" netns="" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.321 [INFO][5259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.321 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.338 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.338 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.338 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.342 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.342 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.343 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.348107 containerd[1463]: 2025-01-30 13:49:44.345 [INFO][5259] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.348528 containerd[1463]: time="2025-01-30T13:49:44.348150324Z" level=info msg="TearDown network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" successfully" Jan 30 13:49:44.348528 containerd[1463]: time="2025-01-30T13:49:44.348176032Z" level=info msg="StopPodSandbox for \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" returns successfully" Jan 30 13:49:44.348574 containerd[1463]: time="2025-01-30T13:49:44.348562298Z" level=info msg="RemovePodSandbox for \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\"" Jan 30 13:49:44.348595 containerd[1463]: time="2025-01-30T13:49:44.348580381Z" level=info msg="Forcibly stopping sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\"" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.378 [WARNING][5288] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0", GenerateName:"calico-kube-controllers-c68cd766f-", Namespace:"calico-system", SelfLink:"", UID:"16c5035b-314e-415a-86ac-f74316cd7f64", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c68cd766f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"194bf444a8e764612de512c4e90aa6c7eb9c013f971fe0b5903a798f9ffe52c1", Pod:"calico-kube-controllers-c68cd766f-p9c29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81cb2a19bf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.378 [INFO][5288] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.378 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" iface="eth0" netns="" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.378 [INFO][5288] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.378 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.397 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.397 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.397 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.401 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.401 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" HandleID="k8s-pod-network.80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Workload="localhost-k8s-calico--kube--controllers--c68cd766f--p9c29-eth0" Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.402 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.408640 containerd[1463]: 2025-01-30 13:49:44.406 [INFO][5288] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207" Jan 30 13:49:44.409047 containerd[1463]: time="2025-01-30T13:49:44.408682697Z" level=info msg="TearDown network for sandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" successfully" Jan 30 13:49:44.412596 containerd[1463]: time="2025-01-30T13:49:44.412568666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:44.412654 containerd[1463]: time="2025-01-30T13:49:44.412617648Z" level=info msg="RemovePodSandbox \"80e1fc5250f0d0d24ca281d164bd947c508b37b3412681a861fa4adb8acc4207\" returns successfully" Jan 30 13:49:44.413083 containerd[1463]: time="2025-01-30T13:49:44.413037416Z" level=info msg="StopPodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\"" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.445 [WARNING][5319] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a4262e5b-77cf-4ec4-9ed8-8944d349be91", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13", Pod:"coredns-6f6b679f8f-6d9h6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4aed67954a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.445 [INFO][5319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.445 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" iface="eth0" netns="" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.445 [INFO][5319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.445 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.467 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.467 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.467 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.471 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.471 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.473 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.478161 containerd[1463]: 2025-01-30 13:49:44.475 [INFO][5319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.478583 containerd[1463]: time="2025-01-30T13:49:44.478228578Z" level=info msg="TearDown network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" successfully" Jan 30 13:49:44.478583 containerd[1463]: time="2025-01-30T13:49:44.478263623Z" level=info msg="StopPodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" returns successfully" Jan 30 13:49:44.478796 containerd[1463]: time="2025-01-30T13:49:44.478768271Z" level=info msg="RemovePodSandbox for \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\"" Jan 30 13:49:44.478850 containerd[1463]: time="2025-01-30T13:49:44.478802665Z" level=info msg="Forcibly stopping sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\"" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.521 [WARNING][5349] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a4262e5b-77cf-4ec4-9ed8-8944d349be91", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e806e76aefce84ca980abb5ad3965a097c7f9b3aee8c36a9a2cb5d471a61bf13", Pod:"coredns-6f6b679f8f-6d9h6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4aed67954a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.522 [INFO][5349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.522 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" iface="eth0" netns="" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.522 [INFO][5349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.522 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.542 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.542 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.542 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.570 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.570 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" HandleID="k8s-pod-network.be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Workload="localhost-k8s-coredns--6f6b679f8f--6d9h6-eth0" Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.571 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.575847 containerd[1463]: 2025-01-30 13:49:44.573 [INFO][5349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa" Jan 30 13:49:44.576297 containerd[1463]: time="2025-01-30T13:49:44.575891424Z" level=info msg="TearDown network for sandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" successfully" Jan 30 13:49:44.666114 containerd[1463]: time="2025-01-30T13:49:44.666071876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:44.666226 containerd[1463]: time="2025-01-30T13:49:44.666157607Z" level=info msg="RemovePodSandbox \"be1e3abe383a22b5a480aec0912b149cd376db3a3c7f08d209d2d7b022c74caa\" returns successfully" Jan 30 13:49:44.666675 containerd[1463]: time="2025-01-30T13:49:44.666632530Z" level=info msg="StopPodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\"" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.697 [WARNING][5378] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d410147-17e9-453a-a04a-5d59f2f808df", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7", Pod:"calico-apiserver-84cd9657bc-8m9s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd41e7c840b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.697 [INFO][5378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.697 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" iface="eth0" netns="" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.697 [INFO][5378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.697 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.715 [INFO][5385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.715 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.715 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.719 [WARNING][5385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.719 [INFO][5385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.720 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.725439 containerd[1463]: 2025-01-30 13:49:44.722 [INFO][5378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.725866 containerd[1463]: time="2025-01-30T13:49:44.725487300Z" level=info msg="TearDown network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" successfully" Jan 30 13:49:44.725866 containerd[1463]: time="2025-01-30T13:49:44.725512117Z" level=info msg="StopPodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" returns successfully" Jan 30 13:49:44.726052 containerd[1463]: time="2025-01-30T13:49:44.726023328Z" level=info msg="RemovePodSandbox for \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\"" Jan 30 13:49:44.726098 containerd[1463]: time="2025-01-30T13:49:44.726059207Z" level=info msg="Forcibly stopping sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\"" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.760 [WARNING][5408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0", GenerateName:"calico-apiserver-84cd9657bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d410147-17e9-453a-a04a-5d59f2f808df", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cd9657bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a387862911d07d23b4dfd49ee1e5f6455482669a7459dfb8b3223115f36ff3e7", Pod:"calico-apiserver-84cd9657bc-8m9s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd41e7c840b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.760 [INFO][5408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.760 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" iface="eth0" netns="" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.760 [INFO][5408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.760 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.781 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.781 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.781 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.795 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.795 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" HandleID="k8s-pod-network.f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Workload="localhost-k8s-calico--apiserver--84cd9657bc--8m9s5-eth0" Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.797 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.802059 containerd[1463]: 2025-01-30 13:49:44.799 [INFO][5408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282" Jan 30 13:49:44.802480 containerd[1463]: time="2025-01-30T13:49:44.802102092Z" level=info msg="TearDown network for sandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" successfully" Jan 30 13:49:44.824866 containerd[1463]: time="2025-01-30T13:49:44.824837688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:44.824989 containerd[1463]: time="2025-01-30T13:49:44.824884085Z" level=info msg="RemovePodSandbox \"f810c10277ab8609c65e8dad41ad0091a499daea41d55f422c90fe13cc298282\" returns successfully" Jan 30 13:49:44.825550 containerd[1463]: time="2025-01-30T13:49:44.825376572Z" level=info msg="StopPodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\"" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.856 [WARNING][5440] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9a799fbf-c296-42f5-b998-fc9dfc513713", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd", Pod:"coredns-6f6b679f8f-hhlnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid955b39eecb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.856 [INFO][5440] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.856 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" iface="eth0" netns="" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.856 [INFO][5440] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.856 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.876 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.876 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.877 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.881 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.881 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.882 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.887408 containerd[1463]: 2025-01-30 13:49:44.884 [INFO][5440] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.887408 containerd[1463]: time="2025-01-30T13:49:44.887339451Z" level=info msg="TearDown network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" successfully" Jan 30 13:49:44.887408 containerd[1463]: time="2025-01-30T13:49:44.887363236Z" level=info msg="StopPodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" returns successfully" Jan 30 13:49:44.888021 containerd[1463]: time="2025-01-30T13:49:44.887999092Z" level=info msg="RemovePodSandbox for \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\"" Jan 30 13:49:44.888093 containerd[1463]: time="2025-01-30T13:49:44.888077520Z" level=info msg="Forcibly stopping sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\"" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.923 [WARNING][5470] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9a799fbf-c296-42f5-b998-fc9dfc513713", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4db595fb76fc07e779bdf0d0c41a49d06cf7b7f83371d420aa67df315f902fd", Pod:"coredns-6f6b679f8f-hhlnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid955b39eecb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.923 [INFO][5470] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.923 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" iface="eth0" netns="" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.923 [INFO][5470] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.923 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.941 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.941 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.941 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.946 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.946 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" HandleID="k8s-pod-network.22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Workload="localhost-k8s-coredns--6f6b679f8f--hhlnc-eth0" Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.947 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:44.951972 containerd[1463]: 2025-01-30 13:49:44.949 [INFO][5470] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37" Jan 30 13:49:44.952392 containerd[1463]: time="2025-01-30T13:49:44.952015354Z" level=info msg="TearDown network for sandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" successfully" Jan 30 13:49:44.955850 containerd[1463]: time="2025-01-30T13:49:44.955821869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:44.955905 containerd[1463]: time="2025-01-30T13:49:44.955864899Z" level=info msg="RemovePodSandbox \"22609e3e52b925d68b5f8d82dd28ba741436faf4108de006d0160ac0e9523b37\" returns successfully" Jan 30 13:49:46.014230 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:49102.service - OpenSSH per-connection server daemon (10.0.0.1:49102). Jan 30 13:49:46.062220 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 49102 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:46.064267 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:46.068866 systemd-logind[1449]: New session 18 of user core. Jan 30 13:49:46.083328 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:49:46.212282 sshd[5486]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:46.216746 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:49102.service: Deactivated successfully. Jan 30 13:49:46.219303 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:49:46.220041 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:49:46.220959 systemd-logind[1449]: Removed session 18. Jan 30 13:49:51.227099 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:49308.service - OpenSSH per-connection server daemon (10.0.0.1:49308). Jan 30 13:49:51.270065 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 49308 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:51.272057 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:51.276255 systemd-logind[1449]: New session 19 of user core. Jan 30 13:49:51.281301 systemd[1]: Started session-19.scope - Session 19 of User core. 
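The teardown above succeeds even though Calico warns that the address it was asked to release no longer exists and refuses to delete the workload endpoint because the CNI_CONTAINERID being torn down is older than the container currently recorded on the endpoint. The following is a minimal, illustrative Go sketch of those two guards, with hypothetical names throughout; it is not Calico's or containerd's actual code.

    // Illustrative sketch only, NOT Calico's or containerd's code. It shows the
    // two guards visible in the log above: (1) a DEL for a stale container ID
    // must not delete an endpoint now owned by a newer container, and (2)
    // releasing an IP that is already gone counts as success so teardown stays
    // idempotent. All names here are hypothetical.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    type endpoint struct {
        containerID string
        ip          string
    }

    // endpoints and allocations stand in for the real datastore/IPAM state.
    var endpoints = map[string]*endpoint{}
    var allocations = map[string]string{}

    // teardown mimics a CNI DEL: release the IP, then delete the endpoint only
    // if it still belongs to the container being torn down.
    func teardown(workload, containerID string) error {
        if err := releaseIP(workload); err != nil && !errors.Is(err, errNotFound) {
            return err // real failures still propagate
        }
        ep, ok := endpoints[workload]
        if !ok {
            return nil // already gone: nothing to do
        }
        if ep.containerID != containerID {
            // Same guard as "CNI_CONTAINERID does not match WorkloadEndpoint
            // ContainerID, don't delete WEP": a newer container owns this endpoint.
            fmt.Println("stale DEL, keeping endpoint owned by", ep.containerID)
            return nil
        }
        delete(endpoints, workload)
        return nil
    }

    func releaseIP(workload string) error {
        if _, ok := allocations[workload]; !ok {
            // Same idea as "Asked to release address but it doesn't exist. Ignoring".
            return errNotFound
        }
        delete(allocations, workload)
        return nil
    }

    func main() {
        endpoints["coredns-hhlnc"] = &endpoint{containerID: "new-container", ip: "192.168.88.133"}
        // A DEL arrives for an old container ID; the endpoint must survive.
        fmt.Println(teardown("coredns-hhlnc", "old-container"))
        // Repeating the same stale DEL is still a no-op, not an error.
        fmt.Println(teardown("coredns-hhlnc", "old-container"))
    }

The point of both guards is that a CNI DEL has to be safe to repeat: a stale or duplicate delete should be a no-op rather than an error, which is why containerd can log "RemovePodSandbox ... returns successfully" immediately after warning that the sandbox status could not be found.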
Jan 30 13:49:51.402400 sshd[5504]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:51.407467 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:49308.service: Deactivated successfully. Jan 30 13:49:51.410335 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:49:51.411078 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:49:51.412799 systemd-logind[1449]: Removed session 19. Jan 30 13:49:55.078324 kubelet[2504]: I0130 13:49:55.078231 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:56.415422 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:49310.service - OpenSSH per-connection server daemon (10.0.0.1:49310). Jan 30 13:49:56.458266 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 49310 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:56.459783 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:56.465493 systemd-logind[1449]: New session 20 of user core. Jan 30 13:49:56.474440 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:49:56.601383 sshd[5549]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:56.615154 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:49310.service: Deactivated successfully. Jan 30 13:49:56.617115 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:49:56.619044 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:49:56.626517 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:49318.service - OpenSSH per-connection server daemon (10.0.0.1:49318). Jan 30 13:49:56.627488 systemd-logind[1449]: Removed session 20. Jan 30 13:49:56.660640 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 49318 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:56.662451 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:56.666813 systemd-logind[1449]: New session 21 of user core. Jan 30 13:49:56.674252 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:49:56.871054 sshd[5563]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:56.885128 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:49318.service: Deactivated successfully. Jan 30 13:49:56.887066 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:49:56.888993 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:49:56.897402 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:49332.service - OpenSSH per-connection server daemon (10.0.0.1:49332). Jan 30 13:49:56.898312 systemd-logind[1449]: Removed session 21. Jan 30 13:49:56.932677 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 49332 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:56.934202 sshd[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:56.938487 systemd-logind[1449]: New session 22 of user core. Jan 30 13:49:56.949252 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:49:58.447781 sshd[5576]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:58.461974 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:49332.service: Deactivated successfully. Jan 30 13:49:58.465628 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:49:58.468023 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. 
Jan 30 13:49:58.484795 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:49334.service - OpenSSH per-connection server daemon (10.0.0.1:49334). Jan 30 13:49:58.487320 systemd-logind[1449]: Removed session 22. Jan 30 13:49:58.524975 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 49334 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:58.527400 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:58.531390 systemd-logind[1449]: New session 23 of user core. Jan 30 13:49:58.540242 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:49:58.740806 sshd[5597]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:58.751253 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:49334.service: Deactivated successfully. Jan 30 13:49:58.753288 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:49:58.755939 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:49:58.764374 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:49344.service - OpenSSH per-connection server daemon (10.0.0.1:49344). Jan 30 13:49:58.765467 systemd-logind[1449]: Removed session 23. Jan 30 13:49:58.798944 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 49344 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:49:58.800366 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:58.804977 systemd-logind[1449]: New session 24 of user core. Jan 30 13:49:58.815261 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:49:58.934535 sshd[5611]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:58.938380 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:49344.service: Deactivated successfully. Jan 30 13:49:58.940657 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:49:58.941356 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:49:58.942209 systemd-logind[1449]: Removed session 24. Jan 30 13:50:03.945836 systemd[1]: Started sshd@24-10.0.0.138:22-10.0.0.1:43456.service - OpenSSH per-connection server daemon (10.0.0.1:43456). Jan 30 13:50:03.981996 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 43456 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:50:03.983377 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:03.987169 systemd-logind[1449]: New session 25 of user core. Jan 30 13:50:04.000252 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:50:04.101889 sshd[5647]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:04.105672 systemd[1]: sshd@24-10.0.0.138:22-10.0.0.1:43456.service: Deactivated successfully. Jan 30 13:50:04.107558 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:50:04.108146 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:50:04.108942 systemd-logind[1449]: Removed session 25. Jan 30 13:50:09.115724 systemd[1]: Started sshd@25-10.0.0.138:22-10.0.0.1:43462.service - OpenSSH per-connection server daemon (10.0.0.1:43462). 
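Each connection above follows the same lifecycle: systemd starts a per-connection sshd@...service, sshd accepts the public key and opens a PAM session, systemd-logind registers "New session N", and on disconnect the session scope is deactivated and the session removed. To measure how long each session lasted from an excerpt like this, a small parser along the lines of the sketch below is enough; it is an illustrative helper, not part of any logged component, and the regular expressions simply match the line shapes visible here.

    // Illustrative helper, not part of any logged component: it pairs
    // "New session N" lines with the matching "Removed session N" lines from a
    // journal excerpt like the one above and prints how long each session lasted.
    // The timestamp layout assumes the "Jan 30 13:49:58.484795" prefix used here.
    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
        "time"
    )

    var (
        tsRe      = regexp.MustCompile(`^([A-Z][a-z]{2} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+)`)
        newRe     = regexp.MustCompile(`New session (\d+) of user`)
        removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        // Two-line stand-in for a real journal excerpt (taken from the log above).
        journal := `Jan 30 13:49:58.531390 systemd-logind[1449]: New session 23 of user core.
    Jan 30 13:49:58.765467 systemd-logind[1449]: Removed session 23.`

        opened := map[string]time.Time{}
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            m := tsRe.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            // The journal prefix carries no year; year 0 is fine for durations.
            ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
            if err != nil {
                continue
            }
            if n := newRe.FindStringSubmatch(line); n != nil {
                opened[n[1]] = ts
            } else if r := removedRe.FindStringSubmatch(line); r != nil {
                if start, ok := opened[r[1]]; ok {
                    fmt.Printf("session %s lasted %v\n", r[1], ts.Sub(start))
                }
            }
        }
    }

Run over the full excerpt, a parser like this shows that these sessions each lasted between a fraction of a second and a couple of seconds, which is consistent with short scripted connections rather than interactive logins.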
Jan 30 13:50:09.153757 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 43462 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:50:09.155536 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:09.159830 systemd-logind[1449]: New session 26 of user core. Jan 30 13:50:09.165343 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:50:09.269185 sshd[5664]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:09.273319 systemd[1]: sshd@25-10.0.0.138:22-10.0.0.1:43462.service: Deactivated successfully. Jan 30 13:50:09.275851 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:50:09.276596 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:50:09.277595 systemd-logind[1449]: Removed session 26. Jan 30 13:50:12.023223 kubelet[2504]: E0130 13:50:12.023168 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:50:14.023413 kubelet[2504]: E0130 13:50:14.023369 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:50:14.282842 systemd[1]: Started sshd@26-10.0.0.138:22-10.0.0.1:58456.service - OpenSSH per-connection server daemon (10.0.0.1:58456). Jan 30 13:50:14.327094 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 58456 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:50:14.329389 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:14.334603 systemd-logind[1449]: New session 27 of user core. Jan 30 13:50:14.340265 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 13:50:14.449546 sshd[5678]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:14.454108 systemd[1]: sshd@26-10.0.0.138:22-10.0.0.1:58456.service: Deactivated successfully. Jan 30 13:50:14.456844 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:50:14.457655 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:50:14.458586 systemd-logind[1449]: Removed session 27. Jan 30 13:50:15.023306 kubelet[2504]: E0130 13:50:15.023267 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:50:19.461021 systemd[1]: Started sshd@27-10.0.0.138:22-10.0.0.1:58464.service - OpenSSH per-connection server daemon (10.0.0.1:58464). Jan 30 13:50:19.499877 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 58464 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:50:19.501696 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:19.505734 systemd-logind[1449]: New session 28 of user core. Jan 30 13:50:19.515256 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 13:50:19.621969 sshd[5693]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:19.625636 systemd[1]: sshd@27-10.0.0.138:22-10.0.0.1:58464.service: Deactivated successfully. Jan 30 13:50:19.627774 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 13:50:19.628412 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit. 
Jan 30 13:50:19.629225 systemd-logind[1449]: Removed session 28. Jan 30 13:50:21.023332 kubelet[2504]: E0130 13:50:21.023282 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
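The recurring kubelet error "Nameserver limits exceeded" means the node's resolv.conf listed more nameservers than the kubelet will propagate, so it kept the first entries (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and dropped the rest. The sketch below shows that capping behaviour in isolation; it is not kubelet's implementation, and the limit of three is assumed from the classic glibc resolver limit rather than taken from this log.

    // Minimal sketch of the capping behaviour the kubelet warning describes;
    // not kubelet's code. The limit of 3 nameservers matches the classic glibc
    // resolver limit and is an assumption here.
    package main

    import "fmt"

    const maxNameservers = 3 // assumed limit

    // capNameservers keeps the first maxNameservers entries and reports whether
    // anything was dropped, mirroring the "some nameservers have been omitted"
    // wording in the log.
    func capNameservers(ns []string) (applied []string, omitted bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // Hypothetical node resolv.conf with four nameservers.
        node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        applied, omitted := capNameservers(node)
        if omitted {
            fmt.Printf("Nameserver limits were exceeded, applied nameserver line is: %v\n", applied)
        }
    }

Because the same three addresses are applied every time, the warning is mostly noise unless one of the dropped nameservers was the only resolver for some zone; trimming the node's resolv.conf to three entries makes it go away.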