Dec 13 01:26:25.889257 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:26:25.889278 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:26:25.889289 kernel: BIOS-provided physical RAM map: Dec 13 01:26:25.889296 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:26:25.889301 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:26:25.889307 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:26:25.889314 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:26:25.889320 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:26:25.889326 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:26:25.889332 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:26:25.889341 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 01:26:25.889347 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Dec 13 01:26:25.889353 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Dec 13 01:26:25.889359 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Dec 13 01:26:25.889367 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:26:25.889373 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:26:25.889382 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:26:25.889389 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:26:25.889395 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 01:26:25.889401 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:26:25.889408 kernel: NX (Execute Disable) protection: active Dec 13 01:26:25.889414 kernel: APIC: Static calls initialized Dec 13 01:26:25.889420 kernel: efi: EFI v2.7 by EDK II Dec 13 01:26:25.889427 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Dec 13 01:26:25.889433 kernel: SMBIOS 2.8 present. 
Dec 13 01:26:25.889440 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 01:26:25.889446 kernel: Hypervisor detected: KVM Dec 13 01:26:25.889455 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:26:25.889461 kernel: kvm-clock: using sched offset of 4170334134 cycles Dec 13 01:26:25.889468 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:26:25.889475 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:26:25.889482 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:26:25.889489 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:26:25.889495 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 01:26:25.889502 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 01:26:25.889509 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:26:25.889518 kernel: Using GB pages for direct mapping Dec 13 01:26:25.889525 kernel: Secure boot disabled Dec 13 01:26:25.889531 kernel: ACPI: Early table checksum verification disabled Dec 13 01:26:25.889538 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 01:26:25.889549 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:26:25.889555 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:25.889562 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:25.889572 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 01:26:25.889579 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:25.889586 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:25.889593 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:25.889600 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:25.889607 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 01:26:25.889614 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 01:26:25.889718 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Dec 13 01:26:25.889726 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 01:26:25.889732 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 01:26:25.889739 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 01:26:25.889746 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 01:26:25.889753 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 01:26:25.889760 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 01:26:25.889766 kernel: No NUMA configuration found Dec 13 01:26:25.889774 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 01:26:25.889783 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 01:26:25.889790 kernel: Zone ranges: Dec 13 01:26:25.889797 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:26:25.889804 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 01:26:25.889811 kernel: Normal empty Dec 13 01:26:25.889817 kernel: Movable zone start for each node Dec 13 01:26:25.889824 kernel: Early memory node ranges Dec 13 01:26:25.889831 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:26:25.889838 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 01:26:25.889845 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 01:26:25.889854 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 01:26:25.889861 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 01:26:25.889868 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 01:26:25.889875 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 01:26:25.889881 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:26:25.889888 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:26:25.889895 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 01:26:25.889902 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:26:25.889916 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 01:26:25.889927 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 01:26:25.889933 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 01:26:25.889940 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:26:25.889947 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:26:25.889954 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:26:25.889961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:26:25.889968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:26:25.889974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:26:25.889981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:26:25.889988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:26:25.889998 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:26:25.890004 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:26:25.890011 kernel: TSC deadline timer available Dec 13 01:26:25.890018 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:26:25.890025 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:26:25.890032 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:26:25.890038 kernel: kvm-guest: setup PV sched yield Dec 13 01:26:25.890045 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:26:25.890052 kernel: Booting paravirtualized kernel on KVM Dec 13 01:26:25.890062 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:26:25.890069 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:26:25.890076 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Dec 13 01:26:25.890082 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Dec 13 01:26:25.890089 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:26:25.890096 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:26:25.890103 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:26:25.890111 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 
01:26:25.890121 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:26:25.890128 kernel: random: crng init done Dec 13 01:26:25.890135 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:26:25.890142 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:26:25.890149 kernel: Fallback order for Node 0: 0 Dec 13 01:26:25.890156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Dec 13 01:26:25.890162 kernel: Policy zone: DMA32 Dec 13 01:26:25.890169 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:26:25.890176 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved) Dec 13 01:26:25.890186 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:26:25.890193 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:26:25.890200 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:26:25.890207 kernel: Dynamic Preempt: voluntary Dec 13 01:26:25.890222 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:26:25.890234 kernel: rcu: RCU event tracing is enabled. Dec 13 01:26:25.890245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:26:25.890254 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:26:25.890264 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:26:25.890273 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:26:25.890283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:26:25.890290 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:26:25.890300 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:26:25.890308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:26:25.890315 kernel: Console: colour dummy device 80x25 Dec 13 01:26:25.890322 kernel: printk: console [ttyS0] enabled Dec 13 01:26:25.890329 kernel: ACPI: Core revision 20230628 Dec 13 01:26:25.890339 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:26:25.890346 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:26:25.890354 kernel: x2apic enabled Dec 13 01:26:25.890361 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:26:25.890368 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 01:26:25.890376 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 01:26:25.890383 kernel: kvm-guest: setup PV IPIs Dec 13 01:26:25.890390 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:26:25.890397 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:26:25.890407 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:26:25.890414 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:26:25.890421 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:26:25.890428 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:26:25.890435 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:26:25.890442 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:26:25.890450 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:26:25.890457 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:26:25.890464 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:26:25.890474 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:26:25.890481 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:26:25.890488 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:26:25.890496 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:26:25.890503 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:26:25.890510 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:26:25.890518 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:26:25.890525 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:26:25.890535 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:26:25.890542 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:26:25.890558 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:26:25.890566 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:26:25.890573 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:26:25.890587 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:26:25.890600 kernel: landlock: Up and running. Dec 13 01:26:25.890608 kernel: SELinux: Initializing. Dec 13 01:26:25.890641 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:26:25.890664 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:26:25.890678 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:26:25.890692 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:26:25.890706 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:26:25.890714 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:26:25.890733 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:26:25.890740 kernel: ... version: 0 Dec 13 01:26:25.890748 kernel: ... bit width: 48 Dec 13 01:26:25.890755 kernel: ... generic registers: 6 Dec 13 01:26:25.890765 kernel: ... value mask: 0000ffffffffffff Dec 13 01:26:25.890772 kernel: ... max period: 00007fffffffffff Dec 13 01:26:25.890779 kernel: ... fixed-purpose events: 0 Dec 13 01:26:25.890786 kernel: ... 
event mask: 000000000000003f Dec 13 01:26:25.890793 kernel: signal: max sigframe size: 1776 Dec 13 01:26:25.890800 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:26:25.890808 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:26:25.890815 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:26:25.890822 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:26:25.890832 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 01:26:25.890839 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:26:25.890846 kernel: smpboot: Max logical packages: 1 Dec 13 01:26:25.890853 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:26:25.890860 kernel: devtmpfs: initialized Dec 13 01:26:25.890867 kernel: x86/mm: Memory block size: 128MB Dec 13 01:26:25.890874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 01:26:25.890882 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 01:26:25.890889 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 01:26:25.890899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 01:26:25.890912 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 01:26:25.890919 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:26:25.890926 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:26:25.890933 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:26:25.890941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:26:25.890948 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:26:25.890955 kernel: audit: type=2000 audit(1734053185.442:1): state=initialized audit_enabled=0 res=1 Dec 13 01:26:25.890962 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:26:25.890972 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:26:25.890979 kernel: cpuidle: using governor menu Dec 13 01:26:25.890986 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:26:25.890994 kernel: dca service started, version 1.12.1 Dec 13 01:26:25.891001 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:26:25.891008 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 01:26:25.891015 kernel: PCI: Using configuration type 1 for base access Dec 13 01:26:25.891023 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:26:25.891030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:26:25.891039 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:26:25.891047 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:26:25.891054 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:26:25.891061 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:26:25.891068 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:26:25.891075 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:26:25.891082 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:26:25.891089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:26:25.891096 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:26:25.891106 kernel: ACPI: Interpreter enabled Dec 13 01:26:25.891113 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:26:25.891120 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:26:25.891127 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:26:25.891134 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:26:25.891142 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:26:25.891149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:26:25.891327 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:26:25.891457 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:26:25.891577 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:26:25.891587 kernel: PCI host bridge to bus 0000:00 Dec 13 01:26:25.891722 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:26:25.891834 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:26:25.892009 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:26:25.892164 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:26:25.892320 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:26:25.892432 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 01:26:25.892539 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:26:25.892689 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:26:25.892817 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:26:25.892944 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 01:26:25.893068 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 01:26:25.893185 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 01:26:25.893300 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 01:26:25.893455 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:26:25.893611 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:26:25.893749 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 01:26:25.893869 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 01:26:25.894003 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 01:26:25.894132 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:26:25.894250 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 
01:26:25.894416 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 01:26:25.894539 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Dec 13 01:26:25.894680 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:26:25.894804 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 01:26:25.894939 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 01:26:25.895060 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 01:26:25.895178 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 01:26:25.895304 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:26:25.895423 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:26:25.895554 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:26:25.895701 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 01:26:25.895819 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 01:26:25.895957 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:26:25.896077 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 01:26:25.896087 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:26:25.896094 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:26:25.896102 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:26:25.896109 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:26:25.896121 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:26:25.896128 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:26:25.896135 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:26:25.896143 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:26:25.896150 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:26:25.896157 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:26:25.896164 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:26:25.896172 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:26:25.896179 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:26:25.896189 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:26:25.896196 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:26:25.896203 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:26:25.896211 kernel: iommu: Default domain type: Translated Dec 13 01:26:25.896218 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:26:25.896225 kernel: efivars: Registered efivars operations Dec 13 01:26:25.896232 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:26:25.896239 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:26:25.896247 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 01:26:25.896256 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 01:26:25.896263 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 01:26:25.896270 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 01:26:25.896389 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:26:25.896506 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:26:25.896638 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 
01:26:25.896649 kernel: vgaarb: loaded Dec 13 01:26:25.896656 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:26:25.896664 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:26:25.896675 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:26:25.896682 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:26:25.896689 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:26:25.896697 kernel: pnp: PnP ACPI init Dec 13 01:26:25.896826 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:26:25.896836 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:26:25.896844 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:26:25.896851 kernel: NET: Registered PF_INET protocol family Dec 13 01:26:25.896862 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:26:25.896870 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:26:25.896877 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:26:25.896884 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:26:25.896892 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:26:25.896899 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:26:25.896913 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:26:25.896921 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:26:25.896928 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:26:25.896938 kernel: NET: Registered PF_XDP protocol family Dec 13 01:26:25.897059 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 01:26:25.897181 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 01:26:25.897292 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:26:25.897401 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:26:25.897510 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:26:25.897633 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:26:25.897755 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:26:25.897868 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 01:26:25.897877 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:26:25.897885 kernel: Initialise system trusted keyrings Dec 13 01:26:25.897892 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:26:25.897900 kernel: Key type asymmetric registered Dec 13 01:26:25.897915 kernel: Asymmetric key parser 'x509' registered Dec 13 01:26:25.897923 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:26:25.897930 kernel: io scheduler mq-deadline registered Dec 13 01:26:25.897941 kernel: io scheduler kyber registered Dec 13 01:26:25.897948 kernel: io scheduler bfq registered Dec 13 01:26:25.897955 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:26:25.897963 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:26:25.897970 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:26:25.897978 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:26:25.897985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Dec 13 01:26:25.897992 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:26:25.898000 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:26:25.898007 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:26:25.898017 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:26:25.898140 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:26:25.898151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:26:25.898260 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:26:25.898371 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:26:25 UTC (1734053185) Dec 13 01:26:25.898498 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:26:25.898508 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:26:25.898520 kernel: efifb: probing for efifb Dec 13 01:26:25.898527 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Dec 13 01:26:25.898534 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Dec 13 01:26:25.898542 kernel: efifb: scrolling: redraw Dec 13 01:26:25.898549 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Dec 13 01:26:25.898557 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 01:26:25.898585 kernel: fb0: EFI VGA frame buffer device Dec 13 01:26:25.898595 kernel: pstore: Using crash dump compression: deflate Dec 13 01:26:25.898602 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:26:25.898612 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:26:25.898637 kernel: Segment Routing with IPv6 Dec 13 01:26:25.898645 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:26:25.898652 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:26:25.898660 kernel: Key type dns_resolver registered Dec 13 01:26:25.898667 kernel: IPI shorthand broadcast: enabled Dec 13 01:26:25.898675 kernel: sched_clock: Marking stable (561003069, 115023973)->(724730699, -48703657) Dec 13 01:26:25.898682 kernel: registered taskstats version 1 Dec 13 01:26:25.898690 kernel: Loading compiled-in X.509 certificates Dec 13 01:26:25.898697 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:26:25.898708 kernel: Key type .fscrypt registered Dec 13 01:26:25.898715 kernel: Key type fscrypt-provisioning registered Dec 13 01:26:25.898722 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:26:25.898730 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:26:25.898737 kernel: ima: No architecture policies found Dec 13 01:26:25.898745 kernel: clk: Disabling unused clocks Dec 13 01:26:25.898752 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:26:25.898760 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:26:25.898770 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:26:25.898777 kernel: Run /init as init process Dec 13 01:26:25.898785 kernel: with arguments: Dec 13 01:26:25.898792 kernel: /init Dec 13 01:26:25.898799 kernel: with environment: Dec 13 01:26:25.898809 kernel: HOME=/ Dec 13 01:26:25.898817 kernel: TERM=linux Dec 13 01:26:25.898824 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:26:25.898834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:25.898846 systemd[1]: Detected virtualization kvm. Dec 13 01:26:25.898855 systemd[1]: Detected architecture x86-64. Dec 13 01:26:25.898863 systemd[1]: Running in initrd. Dec 13 01:26:25.898873 systemd[1]: No hostname configured, using default hostname. Dec 13 01:26:25.898883 systemd[1]: Hostname set to . Dec 13 01:26:25.898891 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:26:25.898899 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:26:25.898913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:25.898921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:25.898930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:26:25.898938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:25.898946 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:26:25.898957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:26:25.898966 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:26:25.898975 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:26:25.898983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:25.898991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:25.898998 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:25.899006 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:25.899017 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:25.899025 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:25.899033 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:25.899041 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:25.899049 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:26:25.899057 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:26:25.899064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:25.899073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:25.899083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:25.899091 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:25.899099 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:26:25.899107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:25.899115 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:26:25.899124 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:26:25.899131 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:25.899139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:25.899147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:25.899158 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:25.899166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:25.899174 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:26:25.899200 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 01:26:25.899221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:26:25.899230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:25.899238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:25.899246 systemd-journald[193]: Journal started Dec 13 01:26:25.899266 systemd-journald[193]: Runtime Journal (/run/log/journal/66513bbd5b3e4058a6c2f7dc8822c73d) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:26:25.892339 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 01:26:25.901666 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:25.914792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:25.917286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:25.918441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:25.924638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:26:25.927116 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 01:26:25.929655 kernel: Bridge firewalling registered Dec 13 01:26:25.928467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:25.932813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:25.933505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:25.940118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:25.942230 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:25.946339 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:26:25.950138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:26:25.953695 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:25.962916 dracut-cmdline[224]: dracut-dracut-053 Dec 13 01:26:25.965629 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:26:25.986854 systemd-resolved[227]: Positive Trust Anchors: Dec 13 01:26:25.986871 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:25.986902 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:25.989426 systemd-resolved[227]: Defaulting to hostname 'linux'. Dec 13 01:26:25.990442 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:25.995944 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:26.059650 kernel: SCSI subsystem initialized Dec 13 01:26:26.068641 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:26:26.079644 kernel: iscsi: registered transport (tcp) Dec 13 01:26:26.099635 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:26:26.099655 kernel: QLogic iSCSI HBA Driver Dec 13 01:26:26.147489 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:26.159739 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:26:26.183086 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:26:26.183113 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:26:26.184093 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:26:26.224649 kernel: raid6: avx2x4 gen() 29534 MB/s Dec 13 01:26:26.241682 kernel: raid6: avx2x2 gen() 29831 MB/s Dec 13 01:26:26.258744 kernel: raid6: avx2x1 gen() 25917 MB/s Dec 13 01:26:26.258782 kernel: raid6: using algorithm avx2x2 gen() 29831 MB/s Dec 13 01:26:26.276748 kernel: raid6: .... xor() 19905 MB/s, rmw enabled Dec 13 01:26:26.276776 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:26:26.297662 kernel: xor: automatically using best checksumming function avx Dec 13 01:26:26.449656 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:26:26.462608 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:26.481836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:26.493233 systemd-udevd[411]: Using default interface naming scheme 'v255'. Dec 13 01:26:26.497731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:26:26.500706 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:26:26.518390 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Dec 13 01:26:26.551001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:26.566770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:26.629882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:26.645839 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:26:26.667593 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:26:26.685666 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:26:26.685824 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:26:26.685836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:26:26.685846 kernel: GPT:9289727 != 19775487 Dec 13 01:26:26.685856 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:26:26.685866 kernel: GPT:9289727 != 19775487 Dec 13 01:26:26.685876 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:26:26.685892 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:26.666008 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:26.670159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:26.671700 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:26.673780 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:26.683778 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:26:26.698116 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:26:26.698135 kernel: AES CTR mode by8 optimization enabled Dec 13 01:26:26.704912 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:26.712696 kernel: libata version 3.00 loaded. 
Dec 13 01:26:26.721697 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (465) Dec 13 01:26:26.722943 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:26:26.742124 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:26:26.742138 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:26:26.742305 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:26:26.742481 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Dec 13 01:26:26.742494 kernel: scsi host0: ahci Dec 13 01:26:26.742685 kernel: scsi host1: ahci Dec 13 01:26:26.742838 kernel: scsi host2: ahci Dec 13 01:26:26.742992 kernel: scsi host3: ahci Dec 13 01:26:26.743185 kernel: scsi host4: ahci Dec 13 01:26:26.743340 kernel: scsi host5: ahci Dec 13 01:26:26.743487 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:26:26.743507 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:26:26.743521 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:26:26.743534 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:26:26.743547 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:26:26.743559 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:26:26.743586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:26:26.751759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:26:26.757663 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:26:26.758238 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:26:26.763508 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:26:26.777812 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:26:26.779020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:26.779086 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:26.782077 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:26.792078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:26.792101 disk-uuid[549]: Primary Header is updated. Dec 13 01:26:26.792101 disk-uuid[549]: Secondary Entries is updated. Dec 13 01:26:26.792101 disk-uuid[549]: Secondary Header is updated. Dec 13 01:26:26.795542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:26.784064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:26.784115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:26.787029 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:26.788820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:26.805974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:26.817107 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 01:26:26.836962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:27.048645 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:27.048726 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:27.048737 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:27.049651 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:26:27.050648 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:27.050671 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:27.051647 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:26:27.052819 kernel: ata3.00: applying bridge limits Dec 13 01:26:27.052831 kernel: ata3.00: configured for UDMA/100 Dec 13 01:26:27.053655 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:26:27.102197 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:26:27.114311 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:26:27.114337 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:26:27.806568 disk-uuid[550]: The operation has completed successfully. Dec 13 01:26:27.807914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:27.835967 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:26:27.836089 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:26:27.853838 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:26:27.856893 sh[591]: Success Dec 13 01:26:27.869645 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:26:27.902100 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:26:27.916997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:26:27.924351 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:26:27.932669 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:26:27.932700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:27.932710 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:26:27.933691 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:26:27.935058 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:26:27.939433 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:26:27.955657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:26:27.967734 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:26:27.970147 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:26:27.978465 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:27.978499 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:27.978511 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:26:27.981649 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:26:27.990731 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Dec 13 01:26:27.992209 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:28.075684 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:28.088816 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:28.095975 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:26:28.098917 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:26:28.115772 systemd-networkd[770]: lo: Link UP Dec 13 01:26:28.115785 systemd-networkd[770]: lo: Gained carrier Dec 13 01:26:28.117690 systemd-networkd[770]: Enumeration completed Dec 13 01:26:28.118110 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:28.118139 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:28.118145 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:28.119116 systemd-networkd[770]: eth0: Link UP Dec 13 01:26:28.119121 systemd-networkd[770]: eth0: Gained carrier Dec 13 01:26:28.119129 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:28.120477 systemd[1]: Reached target network.target - Network. Dec 13 01:26:28.137972 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:26:28.162684 ignition[773]: Ignition 2.19.0 Dec 13 01:26:28.162695 ignition[773]: Stage: fetch-offline Dec 13 01:26:28.162733 ignition[773]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:28.162742 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:28.162856 ignition[773]: parsed url from cmdline: "" Dec 13 01:26:28.162860 ignition[773]: no config URL provided Dec 13 01:26:28.162866 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:26:28.162875 ignition[773]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:26:28.162902 ignition[773]: op(1): [started] loading QEMU firmware config module Dec 13 01:26:28.162907 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:26:28.171866 ignition[773]: op(1): [finished] loading QEMU firmware config module Dec 13 01:26:28.216820 ignition[773]: parsing config with SHA512: 3dacaeafb79c3f6021ce44158f230a502b977205a9c118e614c905311e4053ad72e804ea97701f64606dfd7e1ec213c3ef5adc6449920f727c1574e37687e03e Dec 13 01:26:28.222011 unknown[773]: fetched base config from "system" Dec 13 01:26:28.222031 unknown[773]: fetched user config from "qemu" Dec 13 01:26:28.223643 ignition[773]: fetch-offline: fetch-offline passed Dec 13 01:26:28.223765 ignition[773]: Ignition finished successfully Dec 13 01:26:28.225558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:28.228787 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:26:28.249998 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:26:28.264151 ignition[784]: Ignition 2.19.0 Dec 13 01:26:28.264161 ignition[784]: Stage: kargs Dec 13 01:26:28.264329 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:28.264341 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:28.265290 ignition[784]: kargs: kargs passed Dec 13 01:26:28.265349 ignition[784]: Ignition finished successfully Dec 13 01:26:28.269139 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:26:28.283975 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:26:28.306918 ignition[792]: Ignition 2.19.0 Dec 13 01:26:28.306928 ignition[792]: Stage: disks Dec 13 01:26:28.307110 ignition[792]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:28.307122 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:28.308005 ignition[792]: disks: disks passed Dec 13 01:26:28.310423 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:26:28.308046 ignition[792]: Ignition finished successfully Dec 13 01:26:28.311780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:28.313289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:26:28.315442 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:28.317320 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:28.319408 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:28.343891 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:26:28.355998 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:26:28.363210 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:26:28.379765 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:26:28.507653 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:26:28.508440 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:26:28.509343 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:26:28.523729 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:28.525643 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:26:28.526303 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:26:28.526343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:26:28.535299 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) Dec 13 01:26:28.526364 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:28.539547 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:28.539578 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:28.539592 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:26:28.541652 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:26:28.542950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:26:28.564768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 13 01:26:28.567747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:26:28.612137 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:26:28.617754 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:26:28.622887 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:26:28.627741 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:26:28.719564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:28.731729 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:26:28.732896 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:26:28.740683 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:28.813595 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:26:28.815579 ignition[923]: INFO : Ignition 2.19.0 Dec 13 01:26:28.815579 ignition[923]: INFO : Stage: mount Dec 13 01:26:28.815579 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:28.815579 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:28.819454 ignition[923]: INFO : mount: mount passed Dec 13 01:26:28.819454 ignition[923]: INFO : Ignition finished successfully Dec 13 01:26:28.818701 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:26:28.828709 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:26:28.932202 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:26:28.947960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:28.954921 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Dec 13 01:26:28.954947 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:28.954958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:28.956645 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:26:28.959647 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:26:28.960681 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:26:28.985238 ignition[955]: INFO : Ignition 2.19.0 Dec 13 01:26:28.985238 ignition[955]: INFO : Stage: files Dec 13 01:26:28.986931 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:28.986931 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:28.986931 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:26:28.990537 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:26:28.990537 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:26:28.990537 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:26:28.990537 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:26:28.996774 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:26:28.996774 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:26:28.996774 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:26:28.996774 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:26:28.996774 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:26:28.990898 unknown[955]: wrote ssh authorized keys file for user: core Dec 13 01:26:29.032460 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:26:29.155855 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:26:29.155855 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:29.159870 ignition[955]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:29.159870 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:26:29.440758 systemd-networkd[770]: eth0: Gained IPv6LL Dec 13 01:26:29.516296 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:26:30.084063 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:30.084063 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 01:26:30.088349 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:26:30.114811 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:26:30.120443 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:26:30.122289 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:26:30.122289 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Dec 
13 01:26:30.122289 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:30.122289 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:30.122289 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:30.122289 ignition[955]: INFO : files: files passed Dec 13 01:26:30.122289 ignition[955]: INFO : Ignition finished successfully Dec 13 01:26:30.124230 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:26:30.139800 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:26:30.142659 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:26:30.144635 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:26:30.144744 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:26:30.152258 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:26:30.155925 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:30.155925 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:30.159181 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:30.161878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:30.165111 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:26:30.179835 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:26:30.205066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:26:30.206187 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:26:30.208851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:26:30.210907 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:26:30.213094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:26:30.225960 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:26:30.240999 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:30.246845 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:26:30.257297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:30.259954 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:30.262864 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:26:30.265042 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:26:30.266186 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:30.269292 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:26:30.271813 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:26:30.274218 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
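(Annotation, not part of the journal: the files-stage operations above, such as fetching the helm tarball, linking /etc/extensions/kubernetes.raw, and enabling prepare-helm.service, are driven by the user config fetched from QEMU earlier. A Butane-style sketch that could produce a few of those operations, with the paths and URLs taken from this log and everything else illustrative, might look roughly like

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
)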
Dec 13 01:26:30.276439 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:30.278769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:30.281036 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:26:30.283112 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:30.285611 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:26:30.287721 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:26:30.289824 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:26:30.291478 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:26:30.292544 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:30.294873 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:30.297153 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:30.299728 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:26:30.300879 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:30.303660 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:26:30.304877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:30.307803 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:26:30.309078 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:30.311698 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:26:30.313590 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:26:30.314929 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:30.318433 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:26:30.320856 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:26:30.323035 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:26:30.323933 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:30.325926 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:26:30.326904 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:30.329109 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:26:30.330328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:30.332949 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:26:30.334008 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:26:30.344982 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:26:30.348152 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:26:30.350259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:26:30.351539 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:30.354327 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:26:30.355558 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:30.361804 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 01:26:30.364777 ignition[1009]: INFO : Ignition 2.19.0 Dec 13 01:26:30.364777 ignition[1009]: INFO : Stage: umount Dec 13 01:26:30.364777 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:30.364777 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:30.364777 ignition[1009]: INFO : umount: umount passed Dec 13 01:26:30.364777 ignition[1009]: INFO : Ignition finished successfully Dec 13 01:26:30.361920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:26:30.365322 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:26:30.365465 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:26:30.367371 systemd[1]: Stopped target network.target - Network. Dec 13 01:26:30.368722 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:26:30.368803 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:26:30.370980 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:26:30.371041 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:26:30.373375 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:26:30.373434 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:26:30.375300 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:26:30.375359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:30.377836 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:26:30.380267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:26:30.382687 systemd-networkd[770]: eth0: DHCPv6 lease lost Dec 13 01:26:30.383571 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:26:30.385421 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:26:30.385571 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:26:30.387149 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:26:30.387198 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:30.395732 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:30.396840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:26:30.396908 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:30.399639 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:30.403044 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:26:30.403155 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:30.407849 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:30.407954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:30.409822 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:30.409890 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:30.411914 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:30.411964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:30.415052 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 13 01:26:30.415231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:30.417513 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:30.417659 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:30.420752 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:30.420845 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:30.423151 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:30.423200 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:30.425131 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:30.425182 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:30.427629 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:30.427679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:30.429679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:30.429730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:30.439776 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:30.441437 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:30.441512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:30.444185 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:30.444236 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:30.446914 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:30.446971 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:30.448516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:30.448565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:30.451230 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:30.451341 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:30.583469 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:26:30.583600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:26:30.585743 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:30.587721 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:26:30.587773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:30.601835 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:30.608763 systemd[1]: Switching root. Dec 13 01:26:30.637934 systemd-journald[193]: Journal stopped Dec 13 01:26:31.836304 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:26:31.836373 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:26:31.836394 kernel: SELinux: policy capability open_perms=1 Dec 13 01:26:31.836405 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:26:31.836420 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:26:31.836432 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:26:31.836443 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:26:31.836459 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:26:31.836474 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:26:31.836485 kernel: audit: type=1403 audit(1734053191.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:26:31.836500 systemd[1]: Successfully loaded SELinux policy in 42.432ms. Dec 13 01:26:31.836515 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.665ms. Dec 13 01:26:31.836527 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:31.836541 systemd[1]: Detected virtualization kvm. Dec 13 01:26:31.836553 systemd[1]: Detected architecture x86-64. Dec 13 01:26:31.836565 systemd[1]: Detected first boot. Dec 13 01:26:31.836582 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:26:31.836596 zram_generator::config[1074]: No configuration found. Dec 13 01:26:31.836609 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:26:31.840572 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:26:31.840605 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:26:31.840631 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:26:31.840644 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:26:31.840658 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:26:31.840670 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:26:31.840690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:26:31.840702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:26:31.840714 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:26:31.840726 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:26:31.840738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:31.840760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:31.840772 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:26:31.840784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:26:31.840800 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:26:31.840812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:31.840824 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Dec 13 01:26:31.840836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:31.840848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:26:31.840859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:31.840871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:31.840884 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:31.840898 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:31.840910 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:26:31.840921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:26:31.840933 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:26:31.840946 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:26:31.840957 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:31.840970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:31.840983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:31.840995 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:26:31.841007 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:26:31.841021 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:26:31.841033 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:26:31.841045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:31.841057 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:26:31.841068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:26:31.841080 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:26:31.841092 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:26:31.841104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:31.841118 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:31.841130 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:26:31.841142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:31.841161 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:31.841173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:31.841184 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:26:31.841197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:31.841210 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:26:31.841222 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:26:31.841236 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Dec 13 01:26:31.841250 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:31.841262 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:31.841274 kernel: fuse: init (API version 7.39) Dec 13 01:26:31.841287 kernel: loop: module loaded Dec 13 01:26:31.841299 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:26:31.841311 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:26:31.841323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:31.841338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:31.841384 systemd-journald[1159]: Collecting audit messages is disabled. Dec 13 01:26:31.841412 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:26:31.841424 kernel: ACPI: bus type drm_connector registered Dec 13 01:26:31.841435 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:26:31.841447 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:26:31.841464 systemd-journald[1159]: Journal started Dec 13 01:26:31.841488 systemd-journald[1159]: Runtime Journal (/run/log/journal/66513bbd5b3e4058a6c2f7dc8822c73d) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:26:31.843650 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:31.845095 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:26:31.846387 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:26:31.847699 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:26:31.849079 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:26:31.850639 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:31.852189 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:26:31.852405 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:26:31.853921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:31.854131 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:31.855576 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:31.855806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:31.857179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:31.857399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:31.858967 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:26:31.859178 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:26:31.860577 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:31.860825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:31.862364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:31.863922 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:26:31.865579 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 13 01:26:31.886273 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:26:31.895564 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:26:31.898111 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:26:31.899252 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:26:31.902950 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:26:31.906177 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:26:31.907341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:31.910665 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:26:31.911886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:31.916122 systemd-journald[1159]: Time spent on flushing to /var/log/journal/66513bbd5b3e4058a6c2f7dc8822c73d is 17.944ms for 979 entries. Dec 13 01:26:31.916122 systemd-journald[1159]: System Journal (/var/log/journal/66513bbd5b3e4058a6c2f7dc8822c73d) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:26:31.942093 systemd-journald[1159]: Received client request to flush runtime journal. Dec 13 01:26:31.915777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:31.925764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:26:31.928568 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:26:31.930845 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:26:31.940752 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:26:31.945431 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:26:31.948547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:26:31.956584 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:31.965109 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Dec 13 01:26:31.965129 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Dec 13 01:26:31.970871 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:31.986855 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:26:31.988383 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:31.992037 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:26:32.006025 udevadm[1226]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:26:32.013591 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:26:32.025769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:32.042667 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Dec 13 01:26:32.042687 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. 
Dec 13 01:26:32.048349 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:32.546013 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:26:32.558958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:32.587189 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Dec 13 01:26:32.604156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:32.615769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:32.627774 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:26:32.635959 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:26:32.666680 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1246) Dec 13 01:26:32.671902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1252) Dec 13 01:26:32.685665 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1246) Dec 13 01:26:32.723975 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:26:32.743665 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:26:32.747650 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:26:32.753656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:26:32.766097 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:26:32.775180 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:26:32.775366 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:26:32.775379 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:26:32.775612 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:26:32.826655 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:26:32.837515 systemd-networkd[1242]: lo: Link UP Dec 13 01:26:32.838333 systemd-networkd[1242]: lo: Gained carrier Dec 13 01:26:32.839956 systemd-networkd[1242]: Enumeration completed Dec 13 01:26:32.840364 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:32.840369 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:32.845142 systemd-networkd[1242]: eth0: Link UP Dec 13 01:26:32.845149 systemd-networkd[1242]: eth0: Gained carrier Dec 13 01:26:32.845164 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:32.877423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:32.879570 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:32.885061 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:26:32.890404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:32.890918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:32.896494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
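(Annotation, not part of the journal: the "potentially unpredictable interface name" messages above refer to the catch-all fallback unit /usr/lib/systemd/network/zz-default.network, which matches interfaces broadly and enables DHCP, and which is how eth0 obtains its DHCPv4 lease in this boot. Its exact contents are not shown in this log; a fallback unit of this kind is typically along the lines of

    [Match]
    Name=*

    [Network]
    DHCP=yes
)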
Dec 13 01:26:32.896663 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:26:32.910880 kernel: kvm_amd: TSC scaling supported Dec 13 01:26:32.910923 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:26:32.910936 kernel: kvm_amd: Nested Paging enabled Dec 13 01:26:32.911824 kernel: kvm_amd: LBR virtualization supported Dec 13 01:26:32.911858 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:26:32.912836 kernel: kvm_amd: Virtual GIF supported Dec 13 01:26:32.943649 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:26:32.956764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:32.970945 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:26:32.983887 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:26:32.993271 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:33.022483 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:26:33.024005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:33.034775 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:26:33.041155 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:33.081743 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:26:33.083152 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:26:33.084434 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:26:33.084460 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:33.085525 systemd[1]: Reached target machines.target - Containers. Dec 13 01:26:33.087577 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:26:33.098749 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:26:33.101373 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:26:33.102495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:33.103397 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:26:33.108906 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:26:33.112183 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:26:33.114343 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:26:33.119501 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:26:33.125646 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:26:33.134876 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:26:33.136761 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Dec 13 01:26:33.147644 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:26:33.168652 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:26:33.237650 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:26:33.296653 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:26:33.309660 kernel: loop4: detected capacity change from 0 to 211296 Dec 13 01:26:33.316644 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:26:33.322998 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:26:33.323759 (sd-merge)[1309]: Merged extensions into '/usr'. Dec 13 01:26:33.391051 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:26:33.391067 systemd[1]: Reloading... Dec 13 01:26:33.453680 zram_generator::config[1334]: No configuration found. Dec 13 01:26:33.542287 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:26:33.714537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:33.778838 systemd[1]: Reloading finished in 387 ms. Dec 13 01:26:33.796506 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:26:33.798094 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:26:33.812778 systemd[1]: Starting ensure-sysext.service... Dec 13 01:26:33.814924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:33.819062 systemd[1]: Reloading requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:26:33.819079 systemd[1]: Reloading... Dec 13 01:26:33.845826 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:26:33.846187 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:26:33.848331 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:26:33.848646 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Dec 13 01:26:33.848741 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Dec 13 01:26:33.852565 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:33.852580 systemd-tmpfiles[1382]: Skipping /boot Dec 13 01:26:33.867054 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:33.867072 systemd-tmpfiles[1382]: Skipping /boot Dec 13 01:26:33.874747 zram_generator::config[1414]: No configuration found. Dec 13 01:26:33.991529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:34.057733 systemd[1]: Reloading finished in 238 ms. Dec 13 01:26:34.078823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:34.095772 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:34.098570 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Dec 13 01:26:34.101093 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:26:34.104851 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:34.109973 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:26:34.117265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:34.117433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:34.119371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:34.123930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:34.127812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:34.129012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:34.129121 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:34.130343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:34.130564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:34.137560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:34.137809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:34.142500 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:34.147710 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:34.148004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:34.154098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:34.154498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:34.185358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:34.186816 systemd-networkd[1242]: eth0: Gained IPv6LL Dec 13 01:26:34.191900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:34.197729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:34.199712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:34.199951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:34.201849 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:26:34.202205 augenrules[1494]: No rules Dec 13 01:26:34.203926 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:34.211077 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:26:34.213135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Dec 13 01:26:34.215087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:26:34.216840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:34.217064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:34.218704 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:34.218908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:34.220719 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:34.220952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:34.231187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:34.231358 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:34.244976 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:26:34.246063 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:26:34.248590 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:34.249228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:34.250546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:34.254849 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:34.256936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:34.262402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:34.262958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:34.263068 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:26:34.263139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:34.264184 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:26:34.268292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:34.268500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:34.270337 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:34.270593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:34.272236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:34.272448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:34.277541 systemd[1]: Finished ensure-sysext.service. Dec 13 01:26:34.282431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:26:34.284462 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:26:34.285882 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:34.286113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:34.288177 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:34.315821 systemd-resolved[1459]: Positive Trust Anchors: Dec 13 01:26:34.315840 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:34.315874 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:34.319552 systemd-resolved[1459]: Defaulting to hostname 'linux'. Dec 13 01:26:34.321995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:34.323341 systemd[1]: Reached target network.target - Network. Dec 13 01:26:34.324256 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:26:34.325335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:34.369013 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:26:34.370554 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:34.902096 systemd-resolved[1459]: Clock change detected. Flushing caches. Dec 13 01:26:34.902127 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:26:34.902167 systemd-timesyncd[1527]: Initial clock synchronization to Fri 2024-12-13 01:26:34.902039 UTC. Dec 13 01:26:34.902939 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:26:34.904218 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:26:34.905459 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:26:34.906725 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:26:34.906753 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:34.907664 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:26:34.908932 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:26:34.910108 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:26:34.911349 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:34.913157 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:26:34.916145 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:26:34.918428 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:26:34.930055 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Dec 13 01:26:34.931172 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:34.932134 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:34.933216 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:26:34.933252 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:34.933274 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:34.934471 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:26:34.936569 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:26:34.938646 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:26:34.941384 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:26:34.946391 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:26:34.947662 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:26:34.950665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:34.952319 jq[1536]: false Dec 13 01:26:34.954998 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:26:34.957886 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:26:34.962128 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:26:34.966811 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:26:34.969991 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:26:34.977215 extend-filesystems[1539]: Found loop3 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found loop4 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found loop5 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found sr0 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda1 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda2 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda3 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found usr Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda4 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda6 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda7 Dec 13 01:26:34.981073 extend-filesystems[1539]: Found vda9 Dec 13 01:26:34.981073 extend-filesystems[1539]: Checking size of /dev/vda9 Dec 13 01:26:34.977515 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:26:34.998651 extend-filesystems[1539]: Resized partition /dev/vda9 Dec 13 01:26:34.985866 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:26:34.996978 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:26:35.001818 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:26:35.015811 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1241) Dec 13 01:26:35.011103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:26:35.011405 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 01:26:35.016087 jq[1567]: true Dec 13 01:26:35.017107 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:26:35.024595 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:26:35.019966 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:26:35.020253 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:26:35.022370 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:26:35.028609 dbus-daemon[1535]: [system] SELinux support is enabled Dec 13 01:26:35.034128 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:26:35.039454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:26:35.039775 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:26:35.051741 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:26:35.058970 jq[1581]: true Dec 13 01:26:35.071775 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:26:35.072172 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:26:35.089359 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:26:35.089909 tar[1580]: linux-amd64/helm Dec 13 01:26:35.110995 update_engine[1565]: I20241213 01:26:35.085129 1565 main.cc:92] Flatcar Update Engine starting Dec 13 01:26:35.110995 update_engine[1565]: I20241213 01:26:35.091006 1565 update_check_scheduler.cc:74] Next update check in 10m28s Dec 13 01:26:35.097213 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:26:35.112329 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:26:35.112329 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:26:35.112329 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:26:35.099865 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:26:35.170153 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:26:35.170265 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Dec 13 01:26:35.099952 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:26:35.099972 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:26:35.101385 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:26:35.101400 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:26:35.103498 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
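The kernel and extend-filesystems records above describe an online ext4 resize of /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each; the resulting sizes follow directly from those block counts. A short worked calculation (numbers taken from the log, arithmetic only):

    # Worked numbers from the resize messages above (4 KiB ext4 blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    old_bytes = old_blocks * BLOCK      # 2_267_021_312 bytes ~= 2.11 GiB
    new_bytes = new_blocks * BLOCK      # 7_637_807_104 bytes ~= 7.11 GiB

    print(f"grew by {(new_bytes - old_bytes) / 2**30:.2f} GiB "
          f"({old_bytes / 2**30:.2f} -> {new_bytes / 2**30:.2f} GiB)")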
Dec 13 01:26:35.110767 systemd-logind[1557]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:26:35.110788 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:26:35.176930 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:26:35.112044 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:26:35.117944 systemd-logind[1557]: New seat seat0. Dec 13 01:26:35.153380 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:26:35.153695 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:26:35.167728 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:26:35.179867 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:26:35.182564 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:26:35.189407 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:26:35.202783 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:26:35.210061 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:26:35.239099 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:26:35.239534 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:26:35.244198 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:26:35.316965 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:26:35.327095 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:26:35.329240 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:26:35.330579 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:26:35.470071 containerd[1582]: time="2024-12-13T01:26:35.469888457Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:26:35.494947 containerd[1582]: time="2024-12-13T01:26:35.494893440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497161 containerd[1582]: time="2024-12-13T01:26:35.497028364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497161 containerd[1582]: time="2024-12-13T01:26:35.497076063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:26:35.497161 containerd[1582]: time="2024-12-13T01:26:35.497096251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:26:35.497322 containerd[1582]: time="2024-12-13T01:26:35.497289193Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:26:35.497322 containerd[1582]: time="2024-12-13T01:26:35.497312868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497398 containerd[1582]: time="2024-12-13T01:26:35.497383961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497421 containerd[1582]: time="2024-12-13T01:26:35.497397576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497687 containerd[1582]: time="2024-12-13T01:26:35.497665018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497687 containerd[1582]: time="2024-12-13T01:26:35.497684725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497734 containerd[1582]: time="2024-12-13T01:26:35.497698180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497734 containerd[1582]: time="2024-12-13T01:26:35.497708469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.497918 containerd[1582]: time="2024-12-13T01:26:35.497819177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.498095 containerd[1582]: time="2024-12-13T01:26:35.498075698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:35.498272 containerd[1582]: time="2024-12-13T01:26:35.498256447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:35.498272 containerd[1582]: time="2024-12-13T01:26:35.498271014Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:26:35.498458 containerd[1582]: time="2024-12-13T01:26:35.498367495Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:26:35.498458 containerd[1582]: time="2024-12-13T01:26:35.498429612Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.503400063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.503450548Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.503466698Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.503480885Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.503494570Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.503656093Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504098062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504216625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504233306Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504254896Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504279472Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504294731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504309078Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.570692 containerd[1582]: time="2024-12-13T01:26:35.504330739Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504345656Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504358611Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504371745Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504391593Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504416349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504430425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504443370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504457647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504471452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504484557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504496840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504509905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504531024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571145 containerd[1582]: time="2024-12-13T01:26:35.504549829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504562794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504574987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504589624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504605764Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504624930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504637313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504648695Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504740407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504766776Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504787245Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504818563Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504835415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504853288Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:26:35.571491 containerd[1582]: time="2024-12-13T01:26:35.504864279Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:26:35.571820 containerd[1582]: time="2024-12-13T01:26:35.504875620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.505284677Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.505353977Z" level=info msg="Connect containerd service" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.505413559Z" level=info msg="using legacy CRI server" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.505424710Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.505539796Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.506176991Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 
01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.506454541Z" level=info msg="Start subscribing containerd event" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.571240669Z" level=info msg="Start recovering state" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.571616223Z" level=info msg="Start event monitor" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.571644466Z" level=info msg="Start snapshots syncer" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.571663261Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.571676626Z" level=info msg="Start streaming server" Dec 13 01:26:35.571869 containerd[1582]: time="2024-12-13T01:26:35.571370282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:26:35.572393 containerd[1582]: time="2024-12-13T01:26:35.571914152Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:26:35.572393 containerd[1582]: time="2024-12-13T01:26:35.571982219Z" level=info msg="containerd successfully booted in 0.103937s" Dec 13 01:26:35.572236 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:26:35.742358 tar[1580]: linux-amd64/LICENSE Dec 13 01:26:35.742477 tar[1580]: linux-amd64/README.md Dec 13 01:26:35.757187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:26:36.361925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:36.363560 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:26:36.364970 systemd[1]: Startup finished in 6.113s (kernel) + 4.750s (userspace) = 10.864s. Dec 13 01:26:36.373385 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:37.292180 kubelet[1667]: E1213 01:26:37.292075 1667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:37.296645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:37.296945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:44.228973 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:26:44.244102 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:34952.service - OpenSSH per-connection server daemon (10.0.0.1:34952). Dec 13 01:26:44.290408 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 34952 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:44.292601 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:44.303234 systemd-logind[1557]: New session 1 of user core. Dec 13 01:26:44.304720 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:44.327113 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:26:44.340855 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:44.343737 systemd[1]: Starting user@500.service - User Manager for UID 500... 
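The kubelet exit earlier in this run comes down to a single missing file: /var/lib/kubelet/config.yaml does not exist yet (it is typically written later by kubeadm or whatever provisions the node), so kubelet exits with status 1 and systemd retries per its restart policy, as the growing restart counter later in the log shows. A minimal sketch of that check, using only the path quoted in the error message:

    # Sketch: the kubelet failure above is just a missing config file; until a
    # provisioner writes it, every restart of kubelet.service fails the same way.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error message

    if not KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} missing -> kubelet exits 1; "
              "systemd schedules another restart")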
Dec 13 01:26:44.351361 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:44.459870 systemd[1687]: Queued start job for default target default.target. Dec 13 01:26:44.460257 systemd[1687]: Created slice app.slice - User Application Slice. Dec 13 01:26:44.460279 systemd[1687]: Reached target paths.target - Paths. Dec 13 01:26:44.460291 systemd[1687]: Reached target timers.target - Timers. Dec 13 01:26:44.470931 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:44.479208 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:44.479275 systemd[1687]: Reached target sockets.target - Sockets. Dec 13 01:26:44.479289 systemd[1687]: Reached target basic.target - Basic System. Dec 13 01:26:44.479325 systemd[1687]: Reached target default.target - Main User Target. Dec 13 01:26:44.479357 systemd[1687]: Startup finished in 121ms. Dec 13 01:26:44.479895 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:44.481318 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:26:44.539142 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:34960.service - OpenSSH per-connection server daemon (10.0.0.1:34960). Dec 13 01:26:44.577321 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 34960 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:44.579329 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:44.584423 systemd-logind[1557]: New session 2 of user core. Dec 13 01:26:44.596053 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:26:44.653329 sshd[1699]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:44.663054 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:34976.service - OpenSSH per-connection server daemon (10.0.0.1:34976). Dec 13 01:26:44.663567 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:34960.service: Deactivated successfully. Dec 13 01:26:44.666270 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:26:44.667955 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:26:44.668642 systemd-logind[1557]: Removed session 2. Dec 13 01:26:44.698644 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 34976 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:44.700630 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:44.705803 systemd-logind[1557]: New session 3 of user core. Dec 13 01:26:44.722240 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:26:44.774109 sshd[1704]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:44.784056 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:34992.service - OpenSSH per-connection server daemon (10.0.0.1:34992). Dec 13 01:26:44.784650 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:34976.service: Deactivated successfully. Dec 13 01:26:44.787743 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:26:44.788950 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:26:44.790066 systemd-logind[1557]: Removed session 3. 
Dec 13 01:26:44.818193 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 34992 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:44.819979 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:44.824481 systemd-logind[1557]: New session 4 of user core. Dec 13 01:26:44.839239 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:26:44.894319 sshd[1712]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:44.904091 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:35002.service - OpenSSH per-connection server daemon (10.0.0.1:35002). Dec 13 01:26:44.904914 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:34992.service: Deactivated successfully. Dec 13 01:26:44.907115 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:26:44.907970 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:26:44.909783 systemd-logind[1557]: Removed session 4. Dec 13 01:26:44.936009 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 35002 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:44.937691 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:44.942332 systemd-logind[1557]: New session 5 of user core. Dec 13 01:26:44.952202 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:26:45.011713 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:26:45.012091 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:45.027895 sudo[1727]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:45.029943 sshd[1720]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:45.041172 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:35008.service - OpenSSH per-connection server daemon (10.0.0.1:35008). Dec 13 01:26:45.041866 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:35002.service: Deactivated successfully. Dec 13 01:26:45.043732 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:26:45.044531 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:26:45.045971 systemd-logind[1557]: Removed session 5. Dec 13 01:26:45.074508 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 35008 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:45.076145 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:45.080176 systemd-logind[1557]: New session 6 of user core. Dec 13 01:26:45.096060 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:26:45.149113 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:26:45.149511 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:45.153170 sudo[1737]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:45.159611 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:26:45.159968 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:45.182066 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:45.183767 auditctl[1740]: No rules Dec 13 01:26:45.185031 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 13 01:26:45.185345 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:45.187179 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:45.223658 augenrules[1759]: No rules Dec 13 01:26:45.225705 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:45.227514 sudo[1736]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:45.229762 sshd[1729]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:45.241052 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:35016.service - OpenSSH per-connection server daemon (10.0.0.1:35016). Dec 13 01:26:45.241508 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:35008.service: Deactivated successfully. Dec 13 01:26:45.244574 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:26:45.245828 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:26:45.246963 systemd-logind[1557]: Removed session 6. Dec 13 01:26:45.273370 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 35016 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:45.274908 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:45.279150 systemd-logind[1557]: New session 7 of user core. Dec 13 01:26:45.289209 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:26:45.343554 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:26:45.343916 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:45.637106 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:26:45.637304 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:26:45.919817 dockerd[1791]: time="2024-12-13T01:26:45.919650571Z" level=info msg="Starting up" Dec 13 01:26:46.688726 dockerd[1791]: time="2024-12-13T01:26:46.688653155Z" level=info msg="Loading containers: start." Dec 13 01:26:46.807829 kernel: Initializing XFRM netlink socket Dec 13 01:26:46.903813 systemd-networkd[1242]: docker0: Link UP Dec 13 01:26:46.949779 dockerd[1791]: time="2024-12-13T01:26:46.949628627Z" level=info msg="Loading containers: done." Dec 13 01:26:46.979590 dockerd[1791]: time="2024-12-13T01:26:46.979488586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:26:46.979831 dockerd[1791]: time="2024-12-13T01:26:46.979658905Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:26:46.979863 dockerd[1791]: time="2024-12-13T01:26:46.979837159Z" level=info msg="Daemon has completed initialization" Dec 13 01:26:47.021719 dockerd[1791]: time="2024-12-13T01:26:47.021634023Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:26:47.021906 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:26:47.299081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:47.312227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:47.489996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
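Once dockerd logs "API listen on /run/docker.sock" (as above), the engine can be queried over that unix socket. A standard-library-only Python sketch of such a query; GET /version is a regular Docker Engine API endpoint, but the exact response shown in the comment is an expectation, not something captured in this log:

    # Sketch: query the Docker Engine API over the unix socket from the log.
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    # Prints the HTTP headers plus a JSON body (engine version 26.1.0 per the log above).
    print(response.decode(errors="replace"))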
Dec 13 01:26:47.494326 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:47.555031 kubelet[1951]: E1213 01:26:47.554619 1951 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:47.563131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:47.563399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:47.882206 containerd[1582]: time="2024-12-13T01:26:47.882044320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:26:49.915986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163462825.mount: Deactivated successfully. Dec 13 01:26:52.785817 containerd[1582]: time="2024-12-13T01:26:52.785706637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:52.786451 containerd[1582]: time="2024-12-13T01:26:52.786393385Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:26:52.787646 containerd[1582]: time="2024-12-13T01:26:52.787609035Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:52.790507 containerd[1582]: time="2024-12-13T01:26:52.790444813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:52.791598 containerd[1582]: time="2024-12-13T01:26:52.791564733Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 4.909466201s" Dec 13 01:26:52.791637 containerd[1582]: time="2024-12-13T01:26:52.791612442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:26:52.817029 containerd[1582]: time="2024-12-13T01:26:52.816980727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:26:55.253914 containerd[1582]: time="2024-12-13T01:26:55.253836942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:55.254588 containerd[1582]: time="2024-12-13T01:26:55.254540722Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:26:55.255813 containerd[1582]: time="2024-12-13T01:26:55.255762634Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:55.258468 
containerd[1582]: time="2024-12-13T01:26:55.258419216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:55.260327 containerd[1582]: time="2024-12-13T01:26:55.260279695Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.443255155s" Dec 13 01:26:55.260327 containerd[1582]: time="2024-12-13T01:26:55.260321333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:26:55.285061 containerd[1582]: time="2024-12-13T01:26:55.285020342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:26:56.624689 containerd[1582]: time="2024-12-13T01:26:56.624589573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:56.625454 containerd[1582]: time="2024-12-13T01:26:56.625383492Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:26:56.626743 containerd[1582]: time="2024-12-13T01:26:56.626716391Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:56.630212 containerd[1582]: time="2024-12-13T01:26:56.630165700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:56.631592 containerd[1582]: time="2024-12-13T01:26:56.631545417Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.346474631s" Dec 13 01:26:56.631645 containerd[1582]: time="2024-12-13T01:26:56.631594239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:26:56.668073 containerd[1582]: time="2024-12-13T01:26:56.668033886Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:26:57.813705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:57.830023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:58.036370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:26:58.091324 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:58.426167 kubelet[2064]: E1213 01:26:58.425956 2064 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:58.431931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:58.432268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:59.364416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738308370.mount: Deactivated successfully. Dec 13 01:27:02.124570 containerd[1582]: time="2024-12-13T01:27:02.124421731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:02.125837 containerd[1582]: time="2024-12-13T01:27:02.125700580Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:27:02.131082 containerd[1582]: time="2024-12-13T01:27:02.130770558Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:02.159884 containerd[1582]: time="2024-12-13T01:27:02.147619375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:02.159884 containerd[1582]: time="2024-12-13T01:27:02.148687909Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 5.480608347s" Dec 13 01:27:02.159884 containerd[1582]: time="2024-12-13T01:27:02.148731431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:27:02.245998 containerd[1582]: time="2024-12-13T01:27:02.237675092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:27:03.052433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822321604.mount: Deactivated successfully. 
Dec 13 01:27:05.879407 containerd[1582]: time="2024-12-13T01:27:05.879343590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:05.880272 containerd[1582]: time="2024-12-13T01:27:05.880220214Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:27:05.882331 containerd[1582]: time="2024-12-13T01:27:05.882254269Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:05.886286 containerd[1582]: time="2024-12-13T01:27:05.886217361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:05.887828 containerd[1582]: time="2024-12-13T01:27:05.887696895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.649966821s" Dec 13 01:27:05.887828 containerd[1582]: time="2024-12-13T01:27:05.887774852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:27:05.923063 containerd[1582]: time="2024-12-13T01:27:05.922934859Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:27:06.523938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994115254.mount: Deactivated successfully. 
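Each "Pulled image ... in ..." record above pairs a byte count with a wall-clock duration, so effective pull throughput is a simple division. A worked example using the coredns numbers just logged (18,182,961 bytes in about 3.65 s):

    # Worked example from the coredns pull reported above.
    size_bytes = 18_182_961           # size "18182961"
    duration_s = 3.649966821          # "in 3.649966821s"

    mib_per_s = size_bytes / duration_s / 2**20
    print(f"{mib_per_s:.1f} MiB/s")   # roughly 4.8 MiB/s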
Dec 13 01:27:06.532183 containerd[1582]: time="2024-12-13T01:27:06.532090513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.533564 containerd[1582]: time="2024-12-13T01:27:06.533494056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:27:06.535121 containerd[1582]: time="2024-12-13T01:27:06.535044583Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.538520 containerd[1582]: time="2024-12-13T01:27:06.538449689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.539468 containerd[1582]: time="2024-12-13T01:27:06.539397537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 616.341861ms" Dec 13 01:27:06.539468 containerd[1582]: time="2024-12-13T01:27:06.539457870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:27:06.564945 containerd[1582]: time="2024-12-13T01:27:06.564900905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:27:07.134603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333848489.mount: Deactivated successfully. Dec 13 01:27:08.503447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:27:08.512501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:08.771240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:08.789266 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:08.980367 kubelet[2206]: E1213 01:27:08.980275 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:08.985925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:08.986279 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:27:10.089535 containerd[1582]: time="2024-12-13T01:27:10.089366945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:10.095844 containerd[1582]: time="2024-12-13T01:27:10.095682953Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:27:10.098195 containerd[1582]: time="2024-12-13T01:27:10.098119670Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:10.105634 containerd[1582]: time="2024-12-13T01:27:10.105549735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:10.107695 containerd[1582]: time="2024-12-13T01:27:10.107625398Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.542672324s" Dec 13 01:27:10.107695 containerd[1582]: time="2024-12-13T01:27:10.107684492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:27:12.783863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:12.794160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:12.813508 systemd[1]: Reloading requested from client PID 2302 ('systemctl') (unit session-7.scope)... Dec 13 01:27:12.813547 systemd[1]: Reloading... Dec 13 01:27:12.912834 zram_generator::config[2344]: No configuration found. Dec 13 01:27:13.159236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:13.237343 systemd[1]: Reloading finished in 423 ms. Dec 13 01:27:13.285172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:13.289681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:13.290916 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:13.291339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:13.293546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:13.440813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:13.445589 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:13.489362 kubelet[2404]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:13.489362 kubelet[2404]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:27:13.489362 kubelet[2404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:13.489850 kubelet[2404]: I1213 01:27:13.489425 2404 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:13.882359 kubelet[2404]: I1213 01:27:13.882322 2404 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:13.882359 kubelet[2404]: I1213 01:27:13.882355 2404 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:13.882601 kubelet[2404]: I1213 01:27:13.882588 2404 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:13.901468 kubelet[2404]: E1213 01:27:13.901409 2404 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.902593 kubelet[2404]: I1213 01:27:13.902541 2404 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:13.916657 kubelet[2404]: I1213 01:27:13.916629 2404 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:13.918534 kubelet[2404]: I1213 01:27:13.918512 2404 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:13.918712 kubelet[2404]: I1213 01:27:13.918694 2404 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:13.918823 kubelet[2404]: I1213 01:27:13.918727 2404 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:13.918823 kubelet[2404]: I1213 01:27:13.918736 2404 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
01:27:13.918909 kubelet[2404]: I1213 01:27:13.918892 2404 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:13.919027 kubelet[2404]: I1213 01:27:13.919013 2404 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:13.919060 kubelet[2404]: I1213 01:27:13.919031 2404 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:13.919090 kubelet[2404]: I1213 01:27:13.919071 2404 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:13.919114 kubelet[2404]: I1213 01:27:13.919109 2404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:13.920103 kubelet[2404]: W1213 01:27:13.919996 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.920161 kubelet[2404]: E1213 01:27:13.920115 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.920531 kubelet[2404]: W1213 01:27:13.920467 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.920587 kubelet[2404]: E1213 01:27:13.920534 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.920657 kubelet[2404]: I1213 01:27:13.920629 2404 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:13.924243 kubelet[2404]: I1213 01:27:13.924221 2404 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:13.925276 kubelet[2404]: W1213 01:27:13.925257 2404 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
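
The Container Manager nodeConfig dump a few entries above lists the hard eviction thresholds in effect: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, and imagefs.available below 15% of capacity. The Go sketch below restates how such a threshold is evaluated (an absolute quantity is compared in bytes, a percentage as a fraction of capacity); it is an illustration of the rule as printed in the log, not the kubelet's eviction manager:

    package main

    import "fmt"

    // threshold mirrors the HardEvictionThresholds entries in the nodeConfig
    // dump above: a signal plus either an absolute quantity in bytes or a
    // fraction of capacity. Illustrative restatement only.
    type threshold struct {
        signal     string
        quantity   int64   // absolute bytes, 0 if unused
        percentage float64 // fraction of capacity, 0 if unused
    }

    func breached(t threshold, available, capacity int64) bool {
        limit := t.quantity
        if t.percentage > 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        // Values from the log: memory.available < 100Mi, nodefs.available < 10%.
        mem := threshold{signal: "memory.available", quantity: 100 * 1024 * 1024}
        nodefs := threshold{signal: "nodefs.available", percentage: 0.10}

        fmt.Println(breached(mem, 80*1024*1024, 8<<30)) // true: below 100Mi
        fmt.Println(breached(nodefs, 20<<30, 100<<30))  // false: 20% free > 10%
    }
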
Dec 13 01:27:13.926446 kubelet[2404]: I1213 01:27:13.926429 2404 server.go:1256] "Started kubelet" Dec 13 01:27:13.926545 kubelet[2404]: I1213 01:27:13.926526 2404 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:13.927052 kubelet[2404]: I1213 01:27:13.927013 2404 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:13.927463 kubelet[2404]: I1213 01:27:13.927437 2404 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:13.931121 kubelet[2404]: I1213 01:27:13.928765 2404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:13.931121 kubelet[2404]: I1213 01:27:13.928881 2404 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:13.931121 kubelet[2404]: I1213 01:27:13.929501 2404 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:13.931121 kubelet[2404]: I1213 01:27:13.930535 2404 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:13.931121 kubelet[2404]: I1213 01:27:13.930583 2404 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:13.931121 kubelet[2404]: E1213 01:27:13.930738 2404 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:27:13.931620 kubelet[2404]: W1213 01:27:13.931565 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.931671 kubelet[2404]: E1213 01:27:13.931631 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.931837 kubelet[2404]: E1213 01:27:13.931818 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Dec 13 01:27:13.933859 kubelet[2404]: I1213 01:27:13.933172 2404 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:13.933859 kubelet[2404]: I1213 01:27:13.933246 2404 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:13.933859 kubelet[2404]: E1213 01:27:13.933837 2404 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810983a95cd0482 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:27:13.926407298 +0000 UTC m=+0.476226657,LastTimestamp:2024-12-13 01:27:13.926407298 +0000 UTC m=+0.476226657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:27:13.934575 kubelet[2404]: I1213 01:27:13.934561 2404 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:13.934696 kubelet[2404]: E1213 01:27:13.934684 2404 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:13.957373 kubelet[2404]: I1213 01:27:13.957277 2404 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:13.959207 kubelet[2404]: I1213 01:27:13.959186 2404 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:13.959266 kubelet[2404]: I1213 01:27:13.959236 2404 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:13.959266 kubelet[2404]: I1213 01:27:13.959266 2404 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:13.959345 kubelet[2404]: E1213 01:27:13.959330 2404 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:13.960137 kubelet[2404]: W1213 01:27:13.960110 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.960186 kubelet[2404]: E1213 01:27:13.960149 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:13.970093 kubelet[2404]: I1213 01:27:13.970049 2404 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:13.970093 kubelet[2404]: I1213 01:27:13.970072 2404 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:13.970093 kubelet[2404]: I1213 01:27:13.970102 2404 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:14.033034 kubelet[2404]: I1213 01:27:14.032979 2404 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:14.033492 kubelet[2404]: E1213 01:27:14.033470 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 13 01:27:14.059744 kubelet[2404]: E1213 01:27:14.059668 2404 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:14.132759 kubelet[2404]: E1213 01:27:14.132613 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Dec 13 01:27:14.235852 kubelet[2404]: I1213 01:27:14.235787 2404 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:14.236447 kubelet[2404]: E1213 01:27:14.236394 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 13 01:27:14.260639 
kubelet[2404]: E1213 01:27:14.260518 2404 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:14.284009 kubelet[2404]: I1213 01:27:14.283943 2404 policy_none.go:49] "None policy: Start" Dec 13 01:27:14.285237 kubelet[2404]: I1213 01:27:14.285208 2404 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:14.285237 kubelet[2404]: I1213 01:27:14.285238 2404 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:14.295058 kubelet[2404]: I1213 01:27:14.295013 2404 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:14.295401 kubelet[2404]: I1213 01:27:14.295378 2404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:14.297037 kubelet[2404]: E1213 01:27:14.297016 2404 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:27:14.533860 kubelet[2404]: E1213 01:27:14.533815 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Dec 13 01:27:14.639011 kubelet[2404]: I1213 01:27:14.638953 2404 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:14.639526 kubelet[2404]: E1213 01:27:14.639477 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 13 01:27:14.661753 kubelet[2404]: I1213 01:27:14.661654 2404 topology_manager.go:215] "Topology Admit Handler" podUID="82150330566e9deab89c4a067bdbe6c3" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:27:14.663665 kubelet[2404]: I1213 01:27:14.663606 2404 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:27:14.665150 kubelet[2404]: I1213 01:27:14.665102 2404 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:27:14.736774 kubelet[2404]: I1213 01:27:14.736496 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:14.736774 kubelet[2404]: I1213 01:27:14.736562 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:14.736774 kubelet[2404]: I1213 01:27:14.736627 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:14.736774 kubelet[2404]: I1213 01:27:14.736718 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:14.736774 kubelet[2404]: I1213 01:27:14.736784 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:14.737102 kubelet[2404]: I1213 01:27:14.736818 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:14.737102 kubelet[2404]: I1213 01:27:14.736841 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:14.737102 kubelet[2404]: I1213 01:27:14.736864 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:14.737102 kubelet[2404]: I1213 01:27:14.736895 2404 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:14.817318 kubelet[2404]: W1213 01:27:14.817128 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:14.817318 kubelet[2404]: E1213 01:27:14.817195 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:14.880518 kubelet[2404]: W1213 01:27:14.880439 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:14.880518 kubelet[2404]: E1213 01:27:14.880510 2404 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:14.969990 kubelet[2404]: E1213 01:27:14.969947 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:14.970674 kubelet[2404]: E1213 01:27:14.970653 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:14.970809 containerd[1582]: time="2024-12-13T01:27:14.970756450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82150330566e9deab89c4a067bdbe6c3,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:14.971274 containerd[1582]: time="2024-12-13T01:27:14.971086712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:14.972547 kubelet[2404]: E1213 01:27:14.972530 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:14.972876 containerd[1582]: time="2024-12-13T01:27:14.972847035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:15.118717 kubelet[2404]: W1213 01:27:15.118441 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:15.118717 kubelet[2404]: E1213 01:27:15.118538 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:15.335487 kubelet[2404]: E1213 01:27:15.335426 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:15.441894 kubelet[2404]: I1213 01:27:15.441735 2404 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:15.442477 kubelet[2404]: E1213 01:27:15.442430 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 13 01:27:15.510760 kubelet[2404]: W1213 01:27:15.510647 2404 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:15.510760 kubelet[2404]: E1213 01:27:15.510745 2404 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:15.891175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314822532.mount: Deactivated successfully. Dec 13 01:27:15.910742 kubelet[2404]: E1213 01:27:15.909035 2404 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.36:6443: connect: connection refused Dec 13 01:27:15.911391 containerd[1582]: time="2024-12-13T01:27:15.910073821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:15.919821 containerd[1582]: time="2024-12-13T01:27:15.919658310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:27:15.921309 containerd[1582]: time="2024-12-13T01:27:15.921205250Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:15.922736 containerd[1582]: time="2024-12-13T01:27:15.922384477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:15.923755 containerd[1582]: time="2024-12-13T01:27:15.923670688Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:15.925675 containerd[1582]: time="2024-12-13T01:27:15.925574551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:15.926901 containerd[1582]: time="2024-12-13T01:27:15.926844020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:15.930230 containerd[1582]: time="2024-12-13T01:27:15.930183701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:15.930958 containerd[1582]: time="2024-12-13T01:27:15.930915741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 959.708387ms" Dec 13 01:27:15.934389 containerd[1582]: time="2024-12-13T01:27:15.934068954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 961.15948ms" Dec 13 01:27:15.935028 containerd[1582]: time="2024-12-13T01:27:15.934949770Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 964.054604ms" Dec 13 01:27:16.424651 containerd[1582]: time="2024-12-13T01:27:16.424314354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:16.424651 containerd[1582]: time="2024-12-13T01:27:16.424570284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:16.424651 containerd[1582]: time="2024-12-13T01:27:16.424243139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:16.424651 containerd[1582]: time="2024-12-13T01:27:16.424368919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:16.424651 containerd[1582]: time="2024-12-13T01:27:16.424386433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:16.424651 containerd[1582]: time="2024-12-13T01:27:16.424513586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:16.425497 containerd[1582]: time="2024-12-13T01:27:16.424755257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:16.426953 containerd[1582]: time="2024-12-13T01:27:16.426033140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:16.431530 containerd[1582]: time="2024-12-13T01:27:16.431085445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:16.431530 containerd[1582]: time="2024-12-13T01:27:16.431182290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:16.431530 containerd[1582]: time="2024-12-13T01:27:16.431222377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:16.431530 containerd[1582]: time="2024-12-13T01:27:16.431386431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:16.592515 containerd[1582]: time="2024-12-13T01:27:16.592367018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7cce3a641152df60c9ef29a429191e4cf25e3788aa11a60abadf75f99c43fb\"" Dec 13 01:27:16.600837 kubelet[2404]: E1213 01:27:16.598878 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.603172 containerd[1582]: time="2024-12-13T01:27:16.603107264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82150330566e9deab89c4a067bdbe6c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb610841184f35aefeb83e8f6734acb266b2499f37c9e0fce1c71b3206b66ed9\"" Dec 13 01:27:16.605386 kubelet[2404]: E1213 01:27:16.605348 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.607265 containerd[1582]: time="2024-12-13T01:27:16.607191529Z" level=info msg="CreateContainer within sandbox \"2c7cce3a641152df60c9ef29a429191e4cf25e3788aa11a60abadf75f99c43fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:16.610203 containerd[1582]: time="2024-12-13T01:27:16.610156786Z" level=info msg="CreateContainer within sandbox \"cb610841184f35aefeb83e8f6734acb266b2499f37c9e0fce1c71b3206b66ed9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:16.612127 containerd[1582]: time="2024-12-13T01:27:16.612067307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"51940251e409daa010741f5cbfd2c969d868cc95b3efd5bb44d834693d8aa9d4\"" Dec 13 01:27:16.613014 kubelet[2404]: E1213 01:27:16.612953 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.616364 containerd[1582]: time="2024-12-13T01:27:16.616302300Z" level=info msg="CreateContainer within sandbox \"51940251e409daa010741f5cbfd2c969d868cc95b3efd5bb44d834693d8aa9d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:16.745437 containerd[1582]: time="2024-12-13T01:27:16.744604617Z" level=info msg="CreateContainer within sandbox \"cb610841184f35aefeb83e8f6734acb266b2499f37c9e0fce1c71b3206b66ed9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db735759b0fbe3b079e2bca8b8a9ccc4ce5fa2e38f8cccbc654ac87f0679812b\"" Dec 13 01:27:16.745884 containerd[1582]: time="2024-12-13T01:27:16.745851149Z" level=info msg="StartContainer for \"db735759b0fbe3b079e2bca8b8a9ccc4ce5fa2e38f8cccbc654ac87f0679812b\"" Dec 13 01:27:16.747189 containerd[1582]: time="2024-12-13T01:27:16.746980027Z" level=info msg="CreateContainer within sandbox \"51940251e409daa010741f5cbfd2c969d868cc95b3efd5bb44d834693d8aa9d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c1a8c583fcf5406d244f41039a837644995a68c5f03ac73245a2f299917c6db0\"" Dec 13 01:27:16.749107 containerd[1582]: time="2024-12-13T01:27:16.747664014Z" level=info msg="StartContainer for 
\"c1a8c583fcf5406d244f41039a837644995a68c5f03ac73245a2f299917c6db0\"" Dec 13 01:27:16.752633 containerd[1582]: time="2024-12-13T01:27:16.752470159Z" level=info msg="CreateContainer within sandbox \"2c7cce3a641152df60c9ef29a429191e4cf25e3788aa11a60abadf75f99c43fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5dc6e3c20593b6b05598fee8b718a9ec1f37e942be5e394ad0472c74f0e3ab6e\"" Dec 13 01:27:16.754317 containerd[1582]: time="2024-12-13T01:27:16.754289005Z" level=info msg="StartContainer for \"5dc6e3c20593b6b05598fee8b718a9ec1f37e942be5e394ad0472c74f0e3ab6e\"" Dec 13 01:27:16.936036 kubelet[2404]: E1213 01:27:16.935998 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="3.2s" Dec 13 01:27:16.963262 containerd[1582]: time="2024-12-13T01:27:16.963162300Z" level=info msg="StartContainer for \"db735759b0fbe3b079e2bca8b8a9ccc4ce5fa2e38f8cccbc654ac87f0679812b\" returns successfully" Dec 13 01:27:16.963448 containerd[1582]: time="2024-12-13T01:27:16.963187429Z" level=info msg="StartContainer for \"c1a8c583fcf5406d244f41039a837644995a68c5f03ac73245a2f299917c6db0\" returns successfully" Dec 13 01:27:16.963448 containerd[1582]: time="2024-12-13T01:27:16.963193490Z" level=info msg="StartContainer for \"5dc6e3c20593b6b05598fee8b718a9ec1f37e942be5e394ad0472c74f0e3ab6e\" returns successfully" Dec 13 01:27:16.980209 kubelet[2404]: E1213 01:27:16.980151 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.985939 kubelet[2404]: E1213 01:27:16.985887 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.990862 kubelet[2404]: E1213 01:27:16.990002 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:17.047386 kubelet[2404]: I1213 01:27:17.046857 2404 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:17.991179 kubelet[2404]: E1213 01:27:17.991139 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:18.663884 kubelet[2404]: I1213 01:27:18.663844 2404 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:27:18.923366 kubelet[2404]: I1213 01:27:18.922449 2404 apiserver.go:52] "Watching apiserver" Dec 13 01:27:18.933448 kubelet[2404]: I1213 01:27:18.932851 2404 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:18.998243 kubelet[2404]: E1213 01:27:18.998181 2404 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:18.998921 kubelet[2404]: E1213 01:27:18.998892 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:20.509122 
update_engine[1565]: I20241213 01:27:20.508980 1565 update_attempter.cc:509] Updating boot flags... Dec 13 01:27:20.537860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2686) Dec 13 01:27:20.584985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2686) Dec 13 01:27:20.674828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2686) Dec 13 01:27:21.610280 systemd[1]: Reloading requested from client PID 2695 ('systemctl') (unit session-7.scope)... Dec 13 01:27:21.610294 systemd[1]: Reloading... Dec 13 01:27:21.708832 zram_generator::config[2740]: No configuration found. Dec 13 01:27:21.824769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:21.903973 systemd[1]: Reloading finished in 293 ms. Dec 13 01:27:21.940995 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:21.941157 kubelet[2404]: I1213 01:27:21.940994 2404 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:21.963477 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:21.963922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:21.973196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:22.120651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:22.137498 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:22.196241 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:22.196241 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:22.196241 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:22.196640 kubelet[2789]: I1213 01:27:22.196226 2789 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:22.201160 kubelet[2789]: I1213 01:27:22.201119 2789 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:22.201160 kubelet[2789]: I1213 01:27:22.201155 2789 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:22.201448 kubelet[2789]: I1213 01:27:22.201433 2789 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:22.202918 kubelet[2789]: I1213 01:27:22.202844 2789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
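
The entry above shows client certificate rotation picking up the combined bundle at /var/lib/kubelet/pki/kubelet-client-current.pem, which carries both the certificate and the private key in a single PEM file. A small stdlib-only Go sketch that loads that same bundle and prints its subject and expiry (illustrative inspection code, not part of the kubelet; the path is the one from the log):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    func main() {
        // kubelet-client-current.pem holds the certificate and key in one file,
        // so the same path is passed for both arguments.
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            log.Fatalf("load client pair: %v", err)
        }
        leaf, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            log.Fatalf("parse leaf certificate: %v", err)
        }
        fmt.Printf("subject=%s notAfter=%s\n", leaf.Subject, leaf.NotAfter)
    }
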
Dec 13 01:27:22.206621 kubelet[2789]: I1213 01:27:22.206395 2789 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:22.215996 kubelet[2789]: I1213 01:27:22.215967 2789 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:22.218643 kubelet[2789]: I1213 01:27:22.216856 2789 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:22.218643 kubelet[2789]: I1213 01:27:22.217075 2789 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:22.218643 kubelet[2789]: I1213 01:27:22.217109 2789 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:22.218643 kubelet[2789]: I1213 01:27:22.217120 2789 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:22.218643 kubelet[2789]: I1213 01:27:22.217168 2789 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:22.218643 kubelet[2789]: I1213 01:27:22.217284 2789 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:22.218972 kubelet[2789]: I1213 01:27:22.217301 2789 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:22.218972 kubelet[2789]: I1213 01:27:22.217334 2789 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:22.218972 kubelet[2789]: I1213 01:27:22.217355 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:22.220376 kubelet[2789]: I1213 01:27:22.220029 2789 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:22.220444 kubelet[2789]: I1213 01:27:22.220401 2789 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:22.221069 kubelet[2789]: I1213 01:27:22.221042 2789 server.go:1256] "Started kubelet" Dec 13 01:27:22.221271 kubelet[2789]: I1213 01:27:22.221253 2789 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:22.221502 kubelet[2789]: I1213 01:27:22.221456 2789 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:22.222546 kubelet[2789]: I1213 01:27:22.222528 2789 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:22.226418 kubelet[2789]: I1213 01:27:22.226398 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:22.232929 kubelet[2789]: I1213 01:27:22.226458 2789 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:22.233641 kubelet[2789]: I1213 01:27:22.233617 2789 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:22.241512 kubelet[2789]: I1213 01:27:22.241477 2789 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:22.241859 kubelet[2789]: I1213 01:27:22.241733 2789 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:22.243028 kubelet[2789]: I1213 01:27:22.242998 2789 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:22.243415 kubelet[2789]: I1213 01:27:22.243106 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:22.246817 kubelet[2789]: E1213 01:27:22.244198 2789 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:22.246817 kubelet[2789]: I1213 01:27:22.246466 2789 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:22.255832 kubelet[2789]: I1213 01:27:22.255654 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:22.257904 kubelet[2789]: I1213 01:27:22.257868 2789 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:27:22.257978 kubelet[2789]: I1213 01:27:22.257911 2789 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:22.257978 kubelet[2789]: I1213 01:27:22.257936 2789 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:22.258044 kubelet[2789]: E1213 01:27:22.257996 2789 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:22.305356 kubelet[2789]: I1213 01:27:22.305319 2789 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:22.305356 kubelet[2789]: I1213 01:27:22.305350 2789 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:22.305356 kubelet[2789]: I1213 01:27:22.305371 2789 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:22.305885 kubelet[2789]: I1213 01:27:22.305582 2789 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:27:22.305885 kubelet[2789]: I1213 01:27:22.305615 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:27:22.305885 kubelet[2789]: I1213 01:27:22.305625 2789 policy_none.go:49] "None policy: Start" Dec 13 01:27:22.306422 kubelet[2789]: I1213 01:27:22.306373 2789 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:22.306422 kubelet[2789]: I1213 01:27:22.306398 2789 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:22.306646 kubelet[2789]: I1213 01:27:22.306625 2789 state_mem.go:75] "Updated machine memory state" Dec 13 01:27:22.308243 kubelet[2789]: I1213 01:27:22.308215 2789 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:22.308510 kubelet[2789]: I1213 01:27:22.308483 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:22.339419 kubelet[2789]: I1213 01:27:22.339390 2789 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:22.347847 kubelet[2789]: I1213 01:27:22.347807 2789 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:27:22.347973 kubelet[2789]: I1213 01:27:22.347916 2789 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:27:22.358476 kubelet[2789]: I1213 01:27:22.358442 2789 topology_manager.go:215] "Topology Admit Handler" podUID="82150330566e9deab89c4a067bdbe6c3" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:27:22.358905 kubelet[2789]: I1213 01:27:22.358778 2789 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:27:22.358905 kubelet[2789]: I1213 01:27:22.358874 2789 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:27:22.542426 kubelet[2789]: I1213 01:27:22.542361 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:22.542426 kubelet[2789]: I1213 01:27:22.542419 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:22.542576 kubelet[2789]: I1213 01:27:22.542448 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:22.542576 kubelet[2789]: I1213 01:27:22.542474 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:22.542576 kubelet[2789]: I1213 01:27:22.542502 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:22.542576 kubelet[2789]: I1213 01:27:22.542526 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:22.542576 kubelet[2789]: I1213 01:27:22.542551 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:22.542715 kubelet[2789]: I1213 01:27:22.542575 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:22.542715 kubelet[2789]: I1213 01:27:22.542604 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:22.678982 kubelet[2789]: E1213 01:27:22.678305 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:22.678982 kubelet[2789]: E1213 01:27:22.678517 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:22.678982 kubelet[2789]: E1213 01:27:22.678784 
2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:23.219069 kubelet[2789]: I1213 01:27:23.219004 2789 apiserver.go:52] "Watching apiserver" Dec 13 01:27:23.242612 kubelet[2789]: I1213 01:27:23.242556 2789 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:23.270922 kubelet[2789]: E1213 01:27:23.270873 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:23.276893 kubelet[2789]: I1213 01:27:23.276840 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.276730383 podStartE2EDuration="1.276730383s" podCreationTimestamp="2024-12-13 01:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:23.26792054 +0000 UTC m=+1.125413439" watchObservedRunningTime="2024-12-13 01:27:23.276730383 +0000 UTC m=+1.134223282" Dec 13 01:27:23.279335 kubelet[2789]: E1213 01:27:23.279277 2789 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:23.280007 kubelet[2789]: E1213 01:27:23.279976 2789 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:23.280724 kubelet[2789]: E1213 01:27:23.280569 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:23.281328 kubelet[2789]: E1213 01:27:23.281290 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:23.292884 kubelet[2789]: I1213 01:27:23.291476 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.291384538 podStartE2EDuration="1.291384538s" podCreationTimestamp="2024-12-13 01:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:23.290441076 +0000 UTC m=+1.147933986" watchObservedRunningTime="2024-12-13 01:27:23.291384538 +0000 UTC m=+1.148877447" Dec 13 01:27:23.292884 kubelet[2789]: I1213 01:27:23.291690 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.291655011 podStartE2EDuration="1.291655011s" podCreationTimestamp="2024-12-13 01:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:23.277206747 +0000 UTC m=+1.134699646" watchObservedRunningTime="2024-12-13 01:27:23.291655011 +0000 UTC m=+1.149147910" Dec 13 01:27:24.272243 kubelet[2789]: E1213 01:27:24.272203 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:24.272752 
kubelet[2789]: E1213 01:27:24.272425 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:25.690530 kubelet[2789]: E1213 01:27:25.690490 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:26.122662 kubelet[2789]: E1213 01:27:26.122631 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:26.275124 kubelet[2789]: E1213 01:27:26.275083 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:26.389925 sudo[1772]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:26.392260 sshd[1765]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:26.396611 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:35016.service: Deactivated successfully. Dec 13 01:27:26.398598 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:27:26.399377 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:27:26.400457 systemd-logind[1557]: Removed session 7. Dec 13 01:27:33.252753 kubelet[2789]: E1213 01:27:33.252709 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:34.775948 kubelet[2789]: I1213 01:27:34.775907 2789 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:27:34.776426 kubelet[2789]: I1213 01:27:34.776381 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:27:34.776456 containerd[1582]: time="2024-12-13T01:27:34.776230753Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
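
The recurring dns.go:153 warnings above come from the classic three-nameserver limit: more resolvers were configured than can be applied, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are kept. A self-contained Go sketch of the same truncation rule applied to a host's /etc/resolv.conf (illustrative only; the kubelet performs the equivalent check when assembling a pod's resolver configuration):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the resolv.conf limit of 3 that the
    // "Nameserver limits exceeded" warnings above refer to.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, keeping first %d of %d: %s\n",
                maxNameservers, len(servers), strings.Join(servers[:maxNameservers], " "))
        } else {
            fmt.Println("nameservers:", strings.Join(servers, " "))
        }
    }
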
Dec 13 01:27:34.909755 kubelet[2789]: I1213 01:27:34.909686 2789 topology_manager.go:215] "Topology Admit Handler" podUID="65a65524-4a1e-4498-bdac-4d939499397d" podNamespace="kube-system" podName="kube-proxy-7c6pj" Dec 13 01:27:35.015147 kubelet[2789]: I1213 01:27:35.015082 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65a65524-4a1e-4498-bdac-4d939499397d-lib-modules\") pod \"kube-proxy-7c6pj\" (UID: \"65a65524-4a1e-4498-bdac-4d939499397d\") " pod="kube-system/kube-proxy-7c6pj" Dec 13 01:27:35.015147 kubelet[2789]: I1213 01:27:35.015132 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65a65524-4a1e-4498-bdac-4d939499397d-xtables-lock\") pod \"kube-proxy-7c6pj\" (UID: \"65a65524-4a1e-4498-bdac-4d939499397d\") " pod="kube-system/kube-proxy-7c6pj" Dec 13 01:27:35.015147 kubelet[2789]: I1213 01:27:35.015152 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwc8\" (UniqueName: \"kubernetes.io/projected/65a65524-4a1e-4498-bdac-4d939499397d-kube-api-access-rcwc8\") pod \"kube-proxy-7c6pj\" (UID: \"65a65524-4a1e-4498-bdac-4d939499397d\") " pod="kube-system/kube-proxy-7c6pj" Dec 13 01:27:35.015373 kubelet[2789]: I1213 01:27:35.015177 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65a65524-4a1e-4498-bdac-4d939499397d-kube-proxy\") pod \"kube-proxy-7c6pj\" (UID: \"65a65524-4a1e-4498-bdac-4d939499397d\") " pod="kube-system/kube-proxy-7c6pj" Dec 13 01:27:35.262842 kubelet[2789]: E1213 01:27:35.262782 2789 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:27:35.262842 kubelet[2789]: E1213 01:27:35.262836 2789 projected.go:200] Error preparing data for projected volume kube-api-access-rcwc8 for pod kube-system/kube-proxy-7c6pj: configmap "kube-root-ca.crt" not found Dec 13 01:27:35.262980 kubelet[2789]: E1213 01:27:35.262920 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65a65524-4a1e-4498-bdac-4d939499397d-kube-api-access-rcwc8 podName:65a65524-4a1e-4498-bdac-4d939499397d nodeName:}" failed. No retries permitted until 2024-12-13 01:27:35.762889051 +0000 UTC m=+13.620381950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rcwc8" (UniqueName: "kubernetes.io/projected/65a65524-4a1e-4498-bdac-4d939499397d-kube-api-access-rcwc8") pod "kube-proxy-7c6pj" (UID: "65a65524-4a1e-4498-bdac-4d939499397d") : configmap "kube-root-ca.crt" not found Dec 13 01:27:35.695609 kubelet[2789]: E1213 01:27:35.695474 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:35.891270 kubelet[2789]: I1213 01:27:35.891006 2789 topology_manager.go:215] "Topology Admit Handler" podUID="48c8d02c-31d1-4e33-826d-6c8f6eb89b1f" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-ms7gp" Dec 13 01:27:35.922946 kubelet[2789]: I1213 01:27:35.922911 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllld\" (UniqueName: \"kubernetes.io/projected/48c8d02c-31d1-4e33-826d-6c8f6eb89b1f-kube-api-access-lllld\") pod \"tigera-operator-c7ccbd65-ms7gp\" (UID: \"48c8d02c-31d1-4e33-826d-6c8f6eb89b1f\") " pod="tigera-operator/tigera-operator-c7ccbd65-ms7gp" Dec 13 01:27:35.922946 kubelet[2789]: I1213 01:27:35.922957 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48c8d02c-31d1-4e33-826d-6c8f6eb89b1f-var-lib-calico\") pod \"tigera-operator-c7ccbd65-ms7gp\" (UID: \"48c8d02c-31d1-4e33-826d-6c8f6eb89b1f\") " pod="tigera-operator/tigera-operator-c7ccbd65-ms7gp" Dec 13 01:27:36.113377 kubelet[2789]: E1213 01:27:36.113342 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:36.113952 containerd[1582]: time="2024-12-13T01:27:36.113915557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c6pj,Uid:65a65524-4a1e-4498-bdac-4d939499397d,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:36.147733 containerd[1582]: time="2024-12-13T01:27:36.147568819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:36.147733 containerd[1582]: time="2024-12-13T01:27:36.147717850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:36.147733 containerd[1582]: time="2024-12-13T01:27:36.147744170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.148122 containerd[1582]: time="2024-12-13T01:27:36.147900665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.200782 containerd[1582]: time="2024-12-13T01:27:36.200727894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ms7gp,Uid:48c8d02c-31d1-4e33-826d-6c8f6eb89b1f,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:27:36.204199 containerd[1582]: time="2024-12-13T01:27:36.204145937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c6pj,Uid:65a65524-4a1e-4498-bdac-4d939499397d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7bcc142140eb4c70b589a647e722f30185e6b92eb167d4456e02fe19cfee793\"" Dec 13 01:27:36.205565 kubelet[2789]: E1213 01:27:36.205330 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:36.208993 containerd[1582]: time="2024-12-13T01:27:36.208925398Z" level=info msg="CreateContainer within sandbox \"a7bcc142140eb4c70b589a647e722f30185e6b92eb167d4456e02fe19cfee793\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:27:36.439718 containerd[1582]: time="2024-12-13T01:27:36.439533077Z" level=info msg="CreateContainer within sandbox \"a7bcc142140eb4c70b589a647e722f30185e6b92eb167d4456e02fe19cfee793\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00918f79fff8f285b42c84102b2931ab7e430dfeb2b30ecf4ecfe85c735bf368\"" Dec 13 01:27:36.440539 containerd[1582]: time="2024-12-13T01:27:36.440486724Z" level=info msg="StartContainer for \"00918f79fff8f285b42c84102b2931ab7e430dfeb2b30ecf4ecfe85c735bf368\"" Dec 13 01:27:36.445938 containerd[1582]: time="2024-12-13T01:27:36.444927957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:36.445938 containerd[1582]: time="2024-12-13T01:27:36.445601466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:36.445938 containerd[1582]: time="2024-12-13T01:27:36.445620431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.445938 containerd[1582]: time="2024-12-13T01:27:36.445884520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:36.512102 containerd[1582]: time="2024-12-13T01:27:36.511712467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ms7gp,Uid:48c8d02c-31d1-4e33-826d-6c8f6eb89b1f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6c5bfcbd39167f96b5e150b68ea0439dda0262bf5c890eda4592f38f1eb48151\"" Dec 13 01:27:36.515923 containerd[1582]: time="2024-12-13T01:27:36.515279211Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:27:36.527478 containerd[1582]: time="2024-12-13T01:27:36.527422724Z" level=info msg="StartContainer for \"00918f79fff8f285b42c84102b2931ab7e430dfeb2b30ecf4ecfe85c735bf368\" returns successfully" Dec 13 01:27:37.294209 kubelet[2789]: E1213 01:27:37.294178 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:38.296417 kubelet[2789]: E1213 01:27:38.296376 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:38.383530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323538066.mount: Deactivated successfully. Dec 13 01:27:39.120375 containerd[1582]: time="2024-12-13T01:27:39.120293847Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:39.122390 containerd[1582]: time="2024-12-13T01:27:39.121932633Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764285" Dec 13 01:27:39.126381 containerd[1582]: time="2024-12-13T01:27:39.126242521Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:39.133538 containerd[1582]: time="2024-12-13T01:27:39.131400175Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:39.133538 containerd[1582]: time="2024-12-13T01:27:39.132845858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.617517152s" Dec 13 01:27:39.133538 containerd[1582]: time="2024-12-13T01:27:39.132900651Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:27:39.136356 containerd[1582]: time="2024-12-13T01:27:39.136151784Z" level=info msg="CreateContainer within sandbox \"6c5bfcbd39167f96b5e150b68ea0439dda0262bf5c890eda4592f38f1eb48151\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:27:39.166449 containerd[1582]: time="2024-12-13T01:27:39.166370430Z" level=info msg="CreateContainer within sandbox \"6c5bfcbd39167f96b5e150b68ea0439dda0262bf5c890eda4592f38f1eb48151\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6bf04ea199c6479805cddde2889d4adb41ed18f6bd7103c7426d258e4bb3c4e6\"" Dec 
13 01:27:39.167746 containerd[1582]: time="2024-12-13T01:27:39.167681068Z" level=info msg="StartContainer for \"6bf04ea199c6479805cddde2889d4adb41ed18f6bd7103c7426d258e4bb3c4e6\"" Dec 13 01:27:39.242982 containerd[1582]: time="2024-12-13T01:27:39.242885783Z" level=info msg="StartContainer for \"6bf04ea199c6479805cddde2889d4adb41ed18f6bd7103c7426d258e4bb3c4e6\" returns successfully" Dec 13 01:27:39.324999 kubelet[2789]: I1213 01:27:39.324937 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7c6pj" podStartSLOduration=5.324836904 podStartE2EDuration="5.324836904s" podCreationTimestamp="2024-12-13 01:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:37.38731798 +0000 UTC m=+15.244810889" watchObservedRunningTime="2024-12-13 01:27:39.324836904 +0000 UTC m=+17.182329803" Dec 13 01:27:39.325644 kubelet[2789]: I1213 01:27:39.325157 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-ms7gp" podStartSLOduration=1.7051924010000001 podStartE2EDuration="4.325124626s" podCreationTimestamp="2024-12-13 01:27:35 +0000 UTC" firstStartedPulling="2024-12-13 01:27:36.513821183 +0000 UTC m=+14.371314082" lastFinishedPulling="2024-12-13 01:27:39.133753407 +0000 UTC m=+16.991246307" observedRunningTime="2024-12-13 01:27:39.322859029 +0000 UTC m=+17.180351938" watchObservedRunningTime="2024-12-13 01:27:39.325124626 +0000 UTC m=+17.182617545" Dec 13 01:27:42.489361 kubelet[2789]: I1213 01:27:42.489304 2789 topology_manager.go:215] "Topology Admit Handler" podUID="474be745-e970-4cda-9c9b-ecc9527c03ba" podNamespace="calico-system" podName="calico-typha-8d787588f-cjh7p" Dec 13 01:27:42.559524 kubelet[2789]: I1213 01:27:42.559438 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/474be745-e970-4cda-9c9b-ecc9527c03ba-tigera-ca-bundle\") pod \"calico-typha-8d787588f-cjh7p\" (UID: \"474be745-e970-4cda-9c9b-ecc9527c03ba\") " pod="calico-system/calico-typha-8d787588f-cjh7p" Dec 13 01:27:42.559524 kubelet[2789]: I1213 01:27:42.559498 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/474be745-e970-4cda-9c9b-ecc9527c03ba-typha-certs\") pod \"calico-typha-8d787588f-cjh7p\" (UID: \"474be745-e970-4cda-9c9b-ecc9527c03ba\") " pod="calico-system/calico-typha-8d787588f-cjh7p" Dec 13 01:27:42.559524 kubelet[2789]: I1213 01:27:42.559522 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9kw\" (UniqueName: \"kubernetes.io/projected/474be745-e970-4cda-9c9b-ecc9527c03ba-kube-api-access-wc9kw\") pod \"calico-typha-8d787588f-cjh7p\" (UID: \"474be745-e970-4cda-9c9b-ecc9527c03ba\") " pod="calico-system/calico-typha-8d787588f-cjh7p" Dec 13 01:27:42.843145 kubelet[2789]: I1213 01:27:42.843104 2789 topology_manager.go:215] "Topology Admit Handler" podUID="1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3" podNamespace="calico-system" podName="calico-node-589p9" Dec 13 01:27:42.862934 kubelet[2789]: I1213 01:27:42.862788 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-tigera-ca-bundle\") pod \"calico-node-589p9\" 
(UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864659 kubelet[2789]: I1213 01:27:42.863381 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-policysync\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864659 kubelet[2789]: I1213 01:27:42.863439 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-cni-bin-dir\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864659 kubelet[2789]: I1213 01:27:42.863469 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-cni-net-dir\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864659 kubelet[2789]: I1213 01:27:42.863506 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-cni-log-dir\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864659 kubelet[2789]: I1213 01:27:42.863548 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-lib-modules\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864952 kubelet[2789]: I1213 01:27:42.863590 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stv6\" (UniqueName: \"kubernetes.io/projected/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-kube-api-access-9stv6\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864952 kubelet[2789]: I1213 01:27:42.863626 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-node-certs\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864952 kubelet[2789]: I1213 01:27:42.863656 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-xtables-lock\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.864952 kubelet[2789]: I1213 01:27:42.863689 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-var-run-calico\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 
01:27:42.864952 kubelet[2789]: I1213 01:27:42.863724 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-var-lib-calico\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.865764 kubelet[2789]: I1213 01:27:42.863755 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3-flexvol-driver-host\") pod \"calico-node-589p9\" (UID: \"1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3\") " pod="calico-system/calico-node-589p9" Dec 13 01:27:42.949656 kubelet[2789]: I1213 01:27:42.949602 2789 topology_manager.go:215] "Topology Admit Handler" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" podNamespace="calico-system" podName="csi-node-driver-xkv2k" Dec 13 01:27:42.950031 kubelet[2789]: E1213 01:27:42.949998 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:42.964276 kubelet[2789]: I1213 01:27:42.964181 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57384486-20a7-4c9b-a347-ccc9ae6fe4a9-kubelet-dir\") pod \"csi-node-driver-xkv2k\" (UID: \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\") " pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:42.964276 kubelet[2789]: I1213 01:27:42.964248 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rblk\" (UniqueName: \"kubernetes.io/projected/57384486-20a7-4c9b-a347-ccc9ae6fe4a9-kube-api-access-4rblk\") pod \"csi-node-driver-xkv2k\" (UID: \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\") " pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:42.964655 kubelet[2789]: I1213 01:27:42.964612 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/57384486-20a7-4c9b-a347-ccc9ae6fe4a9-socket-dir\") pod \"csi-node-driver-xkv2k\" (UID: \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\") " pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:42.964719 kubelet[2789]: I1213 01:27:42.964696 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/57384486-20a7-4c9b-a347-ccc9ae6fe4a9-varrun\") pod \"csi-node-driver-xkv2k\" (UID: \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\") " pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:42.964719 kubelet[2789]: I1213 01:27:42.964717 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/57384486-20a7-4c9b-a347-ccc9ae6fe4a9-registration-dir\") pod \"csi-node-driver-xkv2k\" (UID: \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\") " pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:42.976292 kubelet[2789]: E1213 01:27:42.975992 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:42.979406 
kubelet[2789]: W1213 01:27:42.979382 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:42.979556 kubelet[2789]: E1213 01:27:42.979539 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:42.980321 kubelet[2789]: E1213 01:27:42.980290 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:42.981472 kubelet[2789]: W1213 01:27:42.981406 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:42.981472 kubelet[2789]: E1213 01:27:42.981440 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:42.987413 kubelet[2789]: E1213 01:27:42.982735 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:42.987413 kubelet[2789]: W1213 01:27:42.982751 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:42.987413 kubelet[2789]: E1213 01:27:42.982767 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.065512 kubelet[2789]: E1213 01:27:43.065476 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.065512 kubelet[2789]: W1213 01:27:43.065513 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.065724 kubelet[2789]: E1213 01:27:43.065542 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.065830 kubelet[2789]: E1213 01:27:43.065812 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.065830 kubelet[2789]: W1213 01:27:43.065826 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.065927 kubelet[2789]: E1213 01:27:43.065862 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:43.066102 kubelet[2789]: E1213 01:27:43.066087 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.066156 kubelet[2789]: W1213 01:27:43.066099 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.066156 kubelet[2789]: E1213 01:27:43.066127 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.066337 kubelet[2789]: E1213 01:27:43.066323 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.066411 kubelet[2789]: W1213 01:27:43.066334 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.066411 kubelet[2789]: E1213 01:27:43.066383 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.066635 kubelet[2789]: E1213 01:27:43.066618 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.066635 kubelet[2789]: W1213 01:27:43.066635 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.066722 kubelet[2789]: E1213 01:27:43.066654 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.066978 kubelet[2789]: E1213 01:27:43.066962 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.066978 kubelet[2789]: W1213 01:27:43.066975 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.067094 kubelet[2789]: E1213 01:27:43.067018 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.067181 kubelet[2789]: E1213 01:27:43.067168 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.067222 kubelet[2789]: W1213 01:27:43.067182 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.067222 kubelet[2789]: E1213 01:27:43.067212 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:43.067404 kubelet[2789]: E1213 01:27:43.067386 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.067404 kubelet[2789]: W1213 01:27:43.067398 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.067481 kubelet[2789]: E1213 01:27:43.067462 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.067613 kubelet[2789]: E1213 01:27:43.067597 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.067613 kubelet[2789]: W1213 01:27:43.067611 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.067686 kubelet[2789]: E1213 01:27:43.067643 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.067873 kubelet[2789]: E1213 01:27:43.067857 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.067873 kubelet[2789]: W1213 01:27:43.067871 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.067993 kubelet[2789]: E1213 01:27:43.067976 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.068092 kubelet[2789]: E1213 01:27:43.068078 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.068092 kubelet[2789]: W1213 01:27:43.068088 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.068164 kubelet[2789]: E1213 01:27:43.068105 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.068316 kubelet[2789]: E1213 01:27:43.068302 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.068316 kubelet[2789]: W1213 01:27:43.068314 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.068398 kubelet[2789]: E1213 01:27:43.068332 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:43.068620 kubelet[2789]: E1213 01:27:43.068607 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.068620 kubelet[2789]: W1213 01:27:43.068618 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.068710 kubelet[2789]: E1213 01:27:43.068679 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.068892 kubelet[2789]: E1213 01:27:43.068859 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.068892 kubelet[2789]: W1213 01:27:43.068870 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.068995 kubelet[2789]: E1213 01:27:43.068907 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.069248 kubelet[2789]: E1213 01:27:43.069234 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.069248 kubelet[2789]: W1213 01:27:43.069245 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.069332 kubelet[2789]: E1213 01:27:43.069305 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.069453 kubelet[2789]: E1213 01:27:43.069422 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.069453 kubelet[2789]: W1213 01:27:43.069436 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.069542 kubelet[2789]: E1213 01:27:43.069495 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.069642 kubelet[2789]: E1213 01:27:43.069626 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.069642 kubelet[2789]: W1213 01:27:43.069638 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.069708 kubelet[2789]: E1213 01:27:43.069663 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:43.069884 kubelet[2789]: E1213 01:27:43.069869 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.069884 kubelet[2789]: W1213 01:27:43.069882 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.069978 kubelet[2789]: E1213 01:27:43.069900 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.070169 kubelet[2789]: E1213 01:27:43.070155 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.070169 kubelet[2789]: W1213 01:27:43.070166 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.070305 kubelet[2789]: E1213 01:27:43.070207 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.070456 kubelet[2789]: E1213 01:27:43.070442 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.070456 kubelet[2789]: W1213 01:27:43.070455 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.070543 kubelet[2789]: E1213 01:27:43.070476 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.070727 kubelet[2789]: E1213 01:27:43.070714 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.070727 kubelet[2789]: W1213 01:27:43.070726 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.070816 kubelet[2789]: E1213 01:27:43.070758 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.070958 kubelet[2789]: E1213 01:27:43.070942 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.070958 kubelet[2789]: W1213 01:27:43.070957 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.071266 kubelet[2789]: E1213 01:27:43.071038 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:43.071419 kubelet[2789]: E1213 01:27:43.071364 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.071419 kubelet[2789]: W1213 01:27:43.071416 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.071490 kubelet[2789]: E1213 01:27:43.071436 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.071705 kubelet[2789]: E1213 01:27:43.071691 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.071705 kubelet[2789]: W1213 01:27:43.071703 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.071771 kubelet[2789]: E1213 01:27:43.071732 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.072084 kubelet[2789]: E1213 01:27:43.072045 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.072084 kubelet[2789]: W1213 01:27:43.072081 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.072182 kubelet[2789]: E1213 01:27:43.072096 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.078602 kubelet[2789]: E1213 01:27:43.078575 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:43.078602 kubelet[2789]: W1213 01:27:43.078599 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:43.078717 kubelet[2789]: E1213 01:27:43.078622 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:43.096148 kubelet[2789]: E1213 01:27:43.096036 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:43.096634 containerd[1582]: time="2024-12-13T01:27:43.096587100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d787588f-cjh7p,Uid:474be745-e970-4cda-9c9b-ecc9527c03ba,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:43.130214 containerd[1582]: time="2024-12-13T01:27:43.130123883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:43.130214 containerd[1582]: time="2024-12-13T01:27:43.130173466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:43.130431 containerd[1582]: time="2024-12-13T01:27:43.130183144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:43.130431 containerd[1582]: time="2024-12-13T01:27:43.130276480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:43.148863 kubelet[2789]: E1213 01:27:43.148464 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:43.149450 containerd[1582]: time="2024-12-13T01:27:43.149416334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-589p9,Uid:1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:43.179514 containerd[1582]: time="2024-12-13T01:27:43.179367389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:43.181751 containerd[1582]: time="2024-12-13T01:27:43.179472568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:43.181751 containerd[1582]: time="2024-12-13T01:27:43.179491503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:43.181751 containerd[1582]: time="2024-12-13T01:27:43.179743007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:43.196847 containerd[1582]: time="2024-12-13T01:27:43.196593703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d787588f-cjh7p,Uid:474be745-e970-4cda-9c9b-ecc9527c03ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"51a1113ef57754ea7e8660c0afceed9cde5a43f1b395952eb422b06ec463cdda\"" Dec 13 01:27:43.198555 kubelet[2789]: E1213 01:27:43.198534 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:43.207054 containerd[1582]: time="2024-12-13T01:27:43.206997860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:27:43.227537 containerd[1582]: time="2024-12-13T01:27:43.227357939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-589p9,Uid:1bd83569-a0e8-4f4a-81f1-9a9aad58e7c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\"" Dec 13 01:27:43.228424 kubelet[2789]: E1213 01:27:43.228396 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:44.258945 kubelet[2789]: E1213 01:27:44.258883 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:44.720394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783147959.mount: Deactivated successfully. 
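The driver-call.go / plugins.go errors that surround the calico-node startup are the kubelet probing the FlexVolume directory nodeagent~uds before the calico-node flexvol-driver has populated the host path: the uds executable is missing, the init call produces empty output, and decoding that empty output as JSON fails with "unexpected end of JSON input". A small Go sketch of why an empty driver response yields exactly that error; the struct and function names here are illustrative assumptions, not kubelet's exact types:

```go
// Sketch: decoding an empty FlexVolume driver response fails the same way
// as the log entries above ("unexpected end of JSON input").
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is a stand-in for the JSON object a FlexVolume driver is
// expected to print; the field names are assumptions for illustration.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func parseInitOutput(output []byte) (*driverStatus, error) {
	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Empty output from a missing executable lands here.
		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", err)
	}
	return &st, nil
}

func main() {
	_, err := parseInitOutput([]byte("")) // no driver binary -> empty output
	fmt.Println(err)                      // ... unexpected end of JSON input
}
```

The probes stop failing once the calico-node pod's flexvol-driver-host mount installs the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/.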
Dec 13 01:27:45.506768 containerd[1582]: time="2024-12-13T01:27:45.506696839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:45.525347 containerd[1582]: time="2024-12-13T01:27:45.525257318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:27:45.535890 containerd[1582]: time="2024-12-13T01:27:45.535842238Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:45.548606 containerd[1582]: time="2024-12-13T01:27:45.548555179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:45.549335 containerd[1582]: time="2024-12-13T01:27:45.549284611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.342234954s" Dec 13 01:27:45.549392 containerd[1582]: time="2024-12-13T01:27:45.549334895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:27:45.550066 containerd[1582]: time="2024-12-13T01:27:45.550019753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:27:45.568909 containerd[1582]: time="2024-12-13T01:27:45.568845141Z" level=info msg="CreateContainer within sandbox \"51a1113ef57754ea7e8660c0afceed9cde5a43f1b395952eb422b06ec463cdda\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:27:45.582708 containerd[1582]: time="2024-12-13T01:27:45.582631311Z" level=info msg="CreateContainer within sandbox \"51a1113ef57754ea7e8660c0afceed9cde5a43f1b395952eb422b06ec463cdda\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b2601190901882b13489ae9332b9168bc3e5b723de24d8193a76d504994b051\"" Dec 13 01:27:45.583378 containerd[1582]: time="2024-12-13T01:27:45.583304747Z" level=info msg="StartContainer for \"2b2601190901882b13489ae9332b9168bc3e5b723de24d8193a76d504994b051\"" Dec 13 01:27:45.659105 containerd[1582]: time="2024-12-13T01:27:45.659059421Z" level=info msg="StartContainer for \"2b2601190901882b13489ae9332b9168bc3e5b723de24d8193a76d504994b051\" returns successfully" Dec 13 01:27:46.260428 kubelet[2789]: E1213 01:27:46.260355 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:46.321978 kubelet[2789]: E1213 01:27:46.321925 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:46.380268 kubelet[2789]: E1213 01:27:46.380218 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 13 01:27:46.380268 kubelet[2789]: W1213 01:27:46.380258 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.380444 kubelet[2789]: E1213 01:27:46.380291 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.381281 kubelet[2789]: E1213 01:27:46.381257 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.381281 kubelet[2789]: W1213 01:27:46.381278 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.381342 kubelet[2789]: E1213 01:27:46.381294 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.382806 kubelet[2789]: E1213 01:27:46.381542 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.382806 kubelet[2789]: W1213 01:27:46.381557 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.382806 kubelet[2789]: E1213 01:27:46.381573 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.384162 kubelet[2789]: E1213 01:27:46.384113 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.384162 kubelet[2789]: W1213 01:27:46.384140 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.384162 kubelet[2789]: E1213 01:27:46.384164 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.384644 kubelet[2789]: E1213 01:27:46.384623 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.384644 kubelet[2789]: W1213 01:27:46.384640 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.384732 kubelet[2789]: E1213 01:27:46.384659 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:46.385010 kubelet[2789]: E1213 01:27:46.384983 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.385010 kubelet[2789]: W1213 01:27:46.385000 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.385090 kubelet[2789]: E1213 01:27:46.385015 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.385273 kubelet[2789]: E1213 01:27:46.385254 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.385273 kubelet[2789]: W1213 01:27:46.385270 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.385353 kubelet[2789]: E1213 01:27:46.385286 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.385550 kubelet[2789]: E1213 01:27:46.385531 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.385550 kubelet[2789]: W1213 01:27:46.385547 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.385616 kubelet[2789]: E1213 01:27:46.385562 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.385870 kubelet[2789]: E1213 01:27:46.385851 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.385870 kubelet[2789]: W1213 01:27:46.385867 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.385961 kubelet[2789]: E1213 01:27:46.385882 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.386156 kubelet[2789]: E1213 01:27:46.386123 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.386189 kubelet[2789]: W1213 01:27:46.386154 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.386189 kubelet[2789]: E1213 01:27:46.386169 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:46.386427 kubelet[2789]: E1213 01:27:46.386406 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.386427 kubelet[2789]: W1213 01:27:46.386422 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.386510 kubelet[2789]: E1213 01:27:46.386439 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.386704 kubelet[2789]: E1213 01:27:46.386686 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.386740 kubelet[2789]: W1213 01:27:46.386705 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.386740 kubelet[2789]: E1213 01:27:46.386721 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.387009 kubelet[2789]: E1213 01:27:46.386989 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.387009 kubelet[2789]: W1213 01:27:46.387005 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.387086 kubelet[2789]: E1213 01:27:46.387021 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.387287 kubelet[2789]: E1213 01:27:46.387268 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.387287 kubelet[2789]: W1213 01:27:46.387284 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.387358 kubelet[2789]: E1213 01:27:46.387299 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.387564 kubelet[2789]: E1213 01:27:46.387544 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.387564 kubelet[2789]: W1213 01:27:46.387562 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.387632 kubelet[2789]: E1213 01:27:46.387578 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:46.391072 kubelet[2789]: E1213 01:27:46.391034 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.391072 kubelet[2789]: W1213 01:27:46.391063 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.391159 kubelet[2789]: E1213 01:27:46.391094 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.391408 kubelet[2789]: E1213 01:27:46.391380 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.391408 kubelet[2789]: W1213 01:27:46.391395 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.391496 kubelet[2789]: E1213 01:27:46.391416 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.391783 kubelet[2789]: E1213 01:27:46.391752 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.391783 kubelet[2789]: W1213 01:27:46.391777 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.391902 kubelet[2789]: E1213 01:27:46.391818 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.392109 kubelet[2789]: E1213 01:27:46.392082 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.392109 kubelet[2789]: W1213 01:27:46.392095 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.392196 kubelet[2789]: E1213 01:27:46.392120 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.392477 kubelet[2789]: E1213 01:27:46.392449 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.392477 kubelet[2789]: W1213 01:27:46.392464 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.392605 kubelet[2789]: E1213 01:27:46.392564 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:46.392748 kubelet[2789]: E1213 01:27:46.392715 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.392748 kubelet[2789]: W1213 01:27:46.392725 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.392902 kubelet[2789]: E1213 01:27:46.392761 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.393008 kubelet[2789]: E1213 01:27:46.392985 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.393008 kubelet[2789]: W1213 01:27:46.392999 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.393081 kubelet[2789]: E1213 01:27:46.393057 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.393237 kubelet[2789]: E1213 01:27:46.393216 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.393237 kubelet[2789]: W1213 01:27:46.393230 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.393292 kubelet[2789]: E1213 01:27:46.393250 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.393573 kubelet[2789]: E1213 01:27:46.393556 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.393602 kubelet[2789]: W1213 01:27:46.393573 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.393623 kubelet[2789]: E1213 01:27:46.393601 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.393902 kubelet[2789]: E1213 01:27:46.393885 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.393947 kubelet[2789]: W1213 01:27:46.393901 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.393947 kubelet[2789]: E1213 01:27:46.393924 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:46.394220 kubelet[2789]: E1213 01:27:46.394204 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.394250 kubelet[2789]: W1213 01:27:46.394219 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.394250 kubelet[2789]: E1213 01:27:46.394240 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.394600 kubelet[2789]: E1213 01:27:46.394568 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.394600 kubelet[2789]: W1213 01:27:46.394587 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.394699 kubelet[2789]: E1213 01:27:46.394614 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.394940 kubelet[2789]: E1213 01:27:46.394914 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.394940 kubelet[2789]: W1213 01:27:46.394941 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.395055 kubelet[2789]: E1213 01:27:46.395030 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.395189 kubelet[2789]: E1213 01:27:46.395173 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.395189 kubelet[2789]: W1213 01:27:46.395188 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.395274 kubelet[2789]: E1213 01:27:46.395228 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.395437 kubelet[2789]: E1213 01:27:46.395422 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.395437 kubelet[2789]: W1213 01:27:46.395435 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.395502 kubelet[2789]: E1213 01:27:46.395451 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:46.395753 kubelet[2789]: E1213 01:27:46.395730 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.395753 kubelet[2789]: W1213 01:27:46.395742 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.395918 kubelet[2789]: E1213 01:27:46.395760 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.396098 kubelet[2789]: E1213 01:27:46.396078 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.396137 kubelet[2789]: W1213 01:27:46.396096 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.396137 kubelet[2789]: E1213 01:27:46.396132 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:46.396526 kubelet[2789]: E1213 01:27:46.396493 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:46.396526 kubelet[2789]: W1213 01:27:46.396531 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:46.396624 kubelet[2789]: E1213 01:27:46.396565 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.323822 kubelet[2789]: I1213 01:27:47.323730 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:47.324676 kubelet[2789]: E1213 01:27:47.324641 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:47.394003 kubelet[2789]: E1213 01:27:47.393952 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.394003 kubelet[2789]: W1213 01:27:47.393982 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.394003 kubelet[2789]: E1213 01:27:47.394009 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.394319 kubelet[2789]: E1213 01:27:47.394302 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.394360 kubelet[2789]: W1213 01:27:47.394316 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.394360 kubelet[2789]: E1213 01:27:47.394335 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.394675 kubelet[2789]: E1213 01:27:47.394651 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.394675 kubelet[2789]: W1213 01:27:47.394667 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.394734 kubelet[2789]: E1213 01:27:47.394679 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.394961 kubelet[2789]: E1213 01:27:47.394938 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.394961 kubelet[2789]: W1213 01:27:47.394952 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.395034 kubelet[2789]: E1213 01:27:47.394965 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.395239 kubelet[2789]: E1213 01:27:47.395213 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.395239 kubelet[2789]: W1213 01:27:47.395230 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.395302 kubelet[2789]: E1213 01:27:47.395247 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.395479 kubelet[2789]: E1213 01:27:47.395458 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.395479 kubelet[2789]: W1213 01:27:47.395471 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.395519 kubelet[2789]: E1213 01:27:47.395483 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.395704 kubelet[2789]: E1213 01:27:47.395689 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.395704 kubelet[2789]: W1213 01:27:47.395701 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.395775 kubelet[2789]: E1213 01:27:47.395713 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.395976 kubelet[2789]: E1213 01:27:47.395962 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.395976 kubelet[2789]: W1213 01:27:47.395975 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.396027 kubelet[2789]: E1213 01:27:47.395988 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.396246 kubelet[2789]: E1213 01:27:47.396230 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.396290 kubelet[2789]: W1213 01:27:47.396244 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.396290 kubelet[2789]: E1213 01:27:47.396259 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.396475 kubelet[2789]: E1213 01:27:47.396461 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.396475 kubelet[2789]: W1213 01:27:47.396473 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.396550 kubelet[2789]: E1213 01:27:47.396486 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.396705 kubelet[2789]: E1213 01:27:47.396677 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.396705 kubelet[2789]: W1213 01:27:47.396690 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.396705 kubelet[2789]: E1213 01:27:47.396705 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.396945 kubelet[2789]: E1213 01:27:47.396937 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.396972 kubelet[2789]: W1213 01:27:47.396948 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.396972 kubelet[2789]: E1213 01:27:47.396961 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.397169 kubelet[2789]: E1213 01:27:47.397154 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.397169 kubelet[2789]: W1213 01:27:47.397166 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.397224 kubelet[2789]: E1213 01:27:47.397179 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.397405 kubelet[2789]: E1213 01:27:47.397391 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.397405 kubelet[2789]: W1213 01:27:47.397403 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.397480 kubelet[2789]: E1213 01:27:47.397416 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.397613 kubelet[2789]: E1213 01:27:47.397596 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.397613 kubelet[2789]: W1213 01:27:47.397610 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.397695 kubelet[2789]: E1213 01:27:47.397622 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.398915 kubelet[2789]: E1213 01:27:47.398825 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.398915 kubelet[2789]: W1213 01:27:47.398841 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.398915 kubelet[2789]: E1213 01:27:47.398854 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.399106 kubelet[2789]: E1213 01:27:47.399082 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.399106 kubelet[2789]: W1213 01:27:47.399096 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.399161 kubelet[2789]: E1213 01:27:47.399115 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.399366 kubelet[2789]: E1213 01:27:47.399345 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.399366 kubelet[2789]: W1213 01:27:47.399359 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.399432 kubelet[2789]: E1213 01:27:47.399375 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.399711 kubelet[2789]: E1213 01:27:47.399681 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.399763 kubelet[2789]: W1213 01:27:47.399709 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.399763 kubelet[2789]: E1213 01:27:47.399754 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.399996 kubelet[2789]: E1213 01:27:47.399981 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.399996 kubelet[2789]: W1213 01:27:47.399991 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.400052 kubelet[2789]: E1213 01:27:47.400008 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.400224 kubelet[2789]: E1213 01:27:47.400204 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.400224 kubelet[2789]: W1213 01:27:47.400219 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.400308 kubelet[2789]: E1213 01:27:47.400243 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.400509 kubelet[2789]: E1213 01:27:47.400492 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.400509 kubelet[2789]: W1213 01:27:47.400502 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.400593 kubelet[2789]: E1213 01:27:47.400540 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.400724 kubelet[2789]: E1213 01:27:47.400709 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.400724 kubelet[2789]: W1213 01:27:47.400719 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.400814 kubelet[2789]: E1213 01:27:47.400768 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.400961 kubelet[2789]: E1213 01:27:47.400945 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.400961 kubelet[2789]: W1213 01:27:47.400956 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.401016 kubelet[2789]: E1213 01:27:47.400967 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.401182 kubelet[2789]: E1213 01:27:47.401167 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.401182 kubelet[2789]: W1213 01:27:47.401177 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.401248 kubelet[2789]: E1213 01:27:47.401187 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.401386 kubelet[2789]: E1213 01:27:47.401370 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.401386 kubelet[2789]: W1213 01:27:47.401380 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.401455 kubelet[2789]: E1213 01:27:47.401389 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.401620 kubelet[2789]: E1213 01:27:47.401605 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.401620 kubelet[2789]: W1213 01:27:47.401616 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.401693 kubelet[2789]: E1213 01:27:47.401632 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.401917 kubelet[2789]: E1213 01:27:47.401900 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.401943 kubelet[2789]: W1213 01:27:47.401916 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.401943 kubelet[2789]: E1213 01:27:47.401937 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.402154 kubelet[2789]: E1213 01:27:47.402140 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.402154 kubelet[2789]: W1213 01:27:47.402152 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.402214 kubelet[2789]: E1213 01:27:47.402171 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.402418 kubelet[2789]: E1213 01:27:47.402402 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.402418 kubelet[2789]: W1213 01:27:47.402415 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.402469 kubelet[2789]: E1213 01:27:47.402432 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.402787 kubelet[2789]: E1213 01:27:47.402756 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.402787 kubelet[2789]: W1213 01:27:47.402774 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.402903 kubelet[2789]: E1213 01:27:47.402875 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:47.403119 kubelet[2789]: E1213 01:27:47.403100 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.403119 kubelet[2789]: W1213 01:27:47.403116 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.403174 kubelet[2789]: E1213 01:27:47.403138 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.403360 kubelet[2789]: E1213 01:27:47.403345 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:47.403360 kubelet[2789]: W1213 01:27:47.403358 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:47.403416 kubelet[2789]: E1213 01:27:47.403371 2789 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:47.855353 containerd[1582]: time="2024-12-13T01:27:47.855266156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:47.856743 containerd[1582]: time="2024-12-13T01:27:47.856627455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:27:47.858164 containerd[1582]: time="2024-12-13T01:27:47.858123668Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:47.861978 containerd[1582]: time="2024-12-13T01:27:47.861933371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:47.862593 containerd[1582]: time="2024-12-13T01:27:47.862524423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.312469734s" Dec 13 01:27:47.862593 containerd[1582]: time="2024-12-13T01:27:47.862565620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:27:47.864442 containerd[1582]: time="2024-12-13T01:27:47.864402855Z" level=info msg="CreateContainer within sandbox \"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:27:47.921164 containerd[1582]: time="2024-12-13T01:27:47.921081020Z" level=info msg="CreateContainer within sandbox 
\"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cecc7b06c2a9096a49467cf885b2a636ed7827c68b3cfe50f08a29cdd6c8789d\"" Dec 13 01:27:47.921920 containerd[1582]: time="2024-12-13T01:27:47.921864463Z" level=info msg="StartContainer for \"cecc7b06c2a9096a49467cf885b2a636ed7827c68b3cfe50f08a29cdd6c8789d\"" Dec 13 01:27:47.990850 containerd[1582]: time="2024-12-13T01:27:47.990678246Z" level=info msg="StartContainer for \"cecc7b06c2a9096a49467cf885b2a636ed7827c68b3cfe50f08a29cdd6c8789d\" returns successfully" Dec 13 01:27:48.030239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cecc7b06c2a9096a49467cf885b2a636ed7827c68b3cfe50f08a29cdd6c8789d-rootfs.mount: Deactivated successfully. Dec 13 01:27:48.258898 kubelet[2789]: E1213 01:27:48.258863 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:48.327923 kubelet[2789]: E1213 01:27:48.327875 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:48.446101 kubelet[2789]: I1213 01:27:48.446018 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8d787588f-cjh7p" podStartSLOduration=4.102102852 podStartE2EDuration="6.445400866s" podCreationTimestamp="2024-12-13 01:27:42 +0000 UTC" firstStartedPulling="2024-12-13 01:27:43.206421655 +0000 UTC m=+21.063914554" lastFinishedPulling="2024-12-13 01:27:45.549719669 +0000 UTC m=+23.407212568" observedRunningTime="2024-12-13 01:27:46.373310606 +0000 UTC m=+24.230803515" watchObservedRunningTime="2024-12-13 01:27:48.445400866 +0000 UTC m=+26.302893765" Dec 13 01:27:48.461239 containerd[1582]: time="2024-12-13T01:27:48.461145876Z" level=info msg="shim disconnected" id=cecc7b06c2a9096a49467cf885b2a636ed7827c68b3cfe50f08a29cdd6c8789d namespace=k8s.io Dec 13 01:27:48.461239 containerd[1582]: time="2024-12-13T01:27:48.461230214Z" level=warning msg="cleaning up after shim disconnected" id=cecc7b06c2a9096a49467cf885b2a636ed7827c68b3cfe50f08a29cdd6c8789d namespace=k8s.io Dec 13 01:27:48.461239 containerd[1582]: time="2024-12-13T01:27:48.461363665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:49.331174 kubelet[2789]: E1213 01:27:49.331141 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:49.332079 containerd[1582]: time="2024-12-13T01:27:49.331905052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:27:50.259287 kubelet[2789]: E1213 01:27:50.259248 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:52.041185 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:47906.service - OpenSSH per-connection server daemon (10.0.0.1:47906). 
Dec 13 01:27:52.077513 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 47906 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:52.079348 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:52.084181 systemd-logind[1557]: New session 8 of user core. Dec 13 01:27:52.090439 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:27:52.258772 kubelet[2789]: E1213 01:27:52.258721 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:52.505567 sshd[3470]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:52.510102 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:47906.service: Deactivated successfully. Dec 13 01:27:52.514865 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:27:52.518130 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:27:52.519749 systemd-logind[1557]: Removed session 8. Dec 13 01:27:53.656561 containerd[1582]: time="2024-12-13T01:27:53.656473568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:53.657242 containerd[1582]: time="2024-12-13T01:27:53.657146422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:27:53.658712 containerd[1582]: time="2024-12-13T01:27:53.658678650Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:53.661331 containerd[1582]: time="2024-12-13T01:27:53.661299143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:53.661944 containerd[1582]: time="2024-12-13T01:27:53.661890515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.329910041s" Dec 13 01:27:53.661944 containerd[1582]: time="2024-12-13T01:27:53.661938835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:27:53.663973 containerd[1582]: time="2024-12-13T01:27:53.663942379Z" level=info msg="CreateContainer within sandbox \"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:27:53.681110 containerd[1582]: time="2024-12-13T01:27:53.681053385Z" level=info msg="CreateContainer within sandbox \"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"938c9d207d6e5aeab98659b59350fb8b1ef23ba4de304f5f1ade48718554c737\"" Dec 13 01:27:53.682455 containerd[1582]: time="2024-12-13T01:27:53.681817409Z" level=info 
msg="StartContainer for \"938c9d207d6e5aeab98659b59350fb8b1ef23ba4de304f5f1ade48718554c737\"" Dec 13 01:27:54.026262 containerd[1582]: time="2024-12-13T01:27:54.026186539Z" level=info msg="StartContainer for \"938c9d207d6e5aeab98659b59350fb8b1ef23ba4de304f5f1ade48718554c737\" returns successfully" Dec 13 01:27:54.259572 kubelet[2789]: E1213 01:27:54.259037 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:54.343618 kubelet[2789]: E1213 01:27:54.343488 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.345845 kubelet[2789]: E1213 01:27:55.345785 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.498026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-938c9d207d6e5aeab98659b59350fb8b1ef23ba4de304f5f1ade48718554c737-rootfs.mount: Deactivated successfully. Dec 13 01:27:55.500201 containerd[1582]: time="2024-12-13T01:27:55.500123735Z" level=info msg="shim disconnected" id=938c9d207d6e5aeab98659b59350fb8b1ef23ba4de304f5f1ade48718554c737 namespace=k8s.io Dec 13 01:27:55.500201 containerd[1582]: time="2024-12-13T01:27:55.500192033Z" level=warning msg="cleaning up after shim disconnected" id=938c9d207d6e5aeab98659b59350fb8b1ef23ba4de304f5f1ade48718554c737 namespace=k8s.io Dec 13 01:27:55.500201 containerd[1582]: time="2024-12-13T01:27:55.500203715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:55.507856 kubelet[2789]: I1213 01:27:55.507817 2789 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:27:55.528512 kubelet[2789]: I1213 01:27:55.528468 2789 topology_manager.go:215] "Topology Admit Handler" podUID="0f340b06-05bb-4342-b343-8cf6258bf943" podNamespace="kube-system" podName="coredns-76f75df574-k9r28" Dec 13 01:27:55.532036 kubelet[2789]: I1213 01:27:55.530041 2789 topology_manager.go:215] "Topology Admit Handler" podUID="131ce79b-ba75-488a-bc92-8c7dd56c5346" podNamespace="calico-system" podName="calico-kube-controllers-7d844b6d79-p5zcv" Dec 13 01:27:55.532036 kubelet[2789]: I1213 01:27:55.531504 2789 topology_manager.go:215] "Topology Admit Handler" podUID="31b29d1a-8f94-417a-ad9f-c1ad8f55cdff" podNamespace="calico-apiserver" podName="calico-apiserver-6fb7cd8fd-s4c26" Dec 13 01:27:55.534470 kubelet[2789]: I1213 01:27:55.534454 2789 topology_manager.go:215] "Topology Admit Handler" podUID="102b567b-63bd-4f1d-8e44-77806d76c7e6" podNamespace="calico-apiserver" podName="calico-apiserver-6fb7cd8fd-6l2rl" Dec 13 01:27:55.540123 kubelet[2789]: I1213 01:27:55.540087 2789 topology_manager.go:215] "Topology Admit Handler" podUID="17c50d1a-a584-449b-a49a-4f7a961468bb" podNamespace="kube-system" podName="coredns-76f75df574-bw7qj" Dec 13 01:27:55.656466 kubelet[2789]: I1213 01:27:55.656313 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqqj\" (UniqueName: \"kubernetes.io/projected/17c50d1a-a584-449b-a49a-4f7a961468bb-kube-api-access-qdqqj\") pod \"coredns-76f75df574-bw7qj\" (UID: 
\"17c50d1a-a584-449b-a49a-4f7a961468bb\") " pod="kube-system/coredns-76f75df574-bw7qj" Dec 13 01:27:55.656466 kubelet[2789]: I1213 01:27:55.656359 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8cn\" (UniqueName: \"kubernetes.io/projected/0f340b06-05bb-4342-b343-8cf6258bf943-kube-api-access-fq8cn\") pod \"coredns-76f75df574-k9r28\" (UID: \"0f340b06-05bb-4342-b343-8cf6258bf943\") " pod="kube-system/coredns-76f75df574-k9r28" Dec 13 01:27:55.656466 kubelet[2789]: I1213 01:27:55.656380 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tds46\" (UniqueName: \"kubernetes.io/projected/102b567b-63bd-4f1d-8e44-77806d76c7e6-kube-api-access-tds46\") pod \"calico-apiserver-6fb7cd8fd-6l2rl\" (UID: \"102b567b-63bd-4f1d-8e44-77806d76c7e6\") " pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" Dec 13 01:27:55.656671 kubelet[2789]: I1213 01:27:55.656614 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31b29d1a-8f94-417a-ad9f-c1ad8f55cdff-calico-apiserver-certs\") pod \"calico-apiserver-6fb7cd8fd-s4c26\" (UID: \"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff\") " pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" Dec 13 01:27:55.656742 kubelet[2789]: I1213 01:27:55.656711 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17c50d1a-a584-449b-a49a-4f7a961468bb-config-volume\") pod \"coredns-76f75df574-bw7qj\" (UID: \"17c50d1a-a584-449b-a49a-4f7a961468bb\") " pod="kube-system/coredns-76f75df574-bw7qj" Dec 13 01:27:55.656779 kubelet[2789]: I1213 01:27:55.656765 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n968\" (UniqueName: \"kubernetes.io/projected/131ce79b-ba75-488a-bc92-8c7dd56c5346-kube-api-access-8n968\") pod \"calico-kube-controllers-7d844b6d79-p5zcv\" (UID: \"131ce79b-ba75-488a-bc92-8c7dd56c5346\") " pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" Dec 13 01:27:55.657435 kubelet[2789]: I1213 01:27:55.656828 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/102b567b-63bd-4f1d-8e44-77806d76c7e6-calico-apiserver-certs\") pod \"calico-apiserver-6fb7cd8fd-6l2rl\" (UID: \"102b567b-63bd-4f1d-8e44-77806d76c7e6\") " pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" Dec 13 01:27:55.657435 kubelet[2789]: I1213 01:27:55.656877 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8pl\" (UniqueName: \"kubernetes.io/projected/31b29d1a-8f94-417a-ad9f-c1ad8f55cdff-kube-api-access-hl8pl\") pod \"calico-apiserver-6fb7cd8fd-s4c26\" (UID: \"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff\") " pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" Dec 13 01:27:55.657435 kubelet[2789]: I1213 01:27:55.656985 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/131ce79b-ba75-488a-bc92-8c7dd56c5346-tigera-ca-bundle\") pod \"calico-kube-controllers-7d844b6d79-p5zcv\" (UID: \"131ce79b-ba75-488a-bc92-8c7dd56c5346\") " pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" Dec 13 01:27:55.657435 kubelet[2789]: 
I1213 01:27:55.657125 2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f340b06-05bb-4342-b343-8cf6258bf943-config-volume\") pod \"coredns-76f75df574-k9r28\" (UID: \"0f340b06-05bb-4342-b343-8cf6258bf943\") " pod="kube-system/coredns-76f75df574-k9r28" Dec 13 01:27:55.835456 containerd[1582]: time="2024-12-13T01:27:55.835405475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d844b6d79-p5zcv,Uid:131ce79b-ba75-488a-bc92-8c7dd56c5346,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:55.835615 kubelet[2789]: E1213 01:27:55.835560 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.836160 containerd[1582]: time="2024-12-13T01:27:55.835989151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-k9r28,Uid:0f340b06-05bb-4342-b343-8cf6258bf943,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:55.838550 containerd[1582]: time="2024-12-13T01:27:55.838488515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-s4c26,Uid:31b29d1a-8f94-417a-ad9f-c1ad8f55cdff,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:27:55.840566 containerd[1582]: time="2024-12-13T01:27:55.840518588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-6l2rl,Uid:102b567b-63bd-4f1d-8e44-77806d76c7e6,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:27:55.845729 kubelet[2789]: E1213 01:27:55.845695 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.846285 containerd[1582]: time="2024-12-13T01:27:55.846078581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bw7qj,Uid:17c50d1a-a584-449b-a49a-4f7a961468bb,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:55.934705 containerd[1582]: time="2024-12-13T01:27:55.934568347Z" level=error msg="Failed to destroy network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.935035 containerd[1582]: time="2024-12-13T01:27:55.935007922Z" level=error msg="encountered an error cleaning up failed sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.935095 containerd[1582]: time="2024-12-13T01:27:55.935062865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-k9r28,Uid:0f340b06-05bb-4342-b343-8cf6258bf943,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.935418 kubelet[2789]: E1213 01:27:55.935389 2789 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.935488 kubelet[2789]: E1213 01:27:55.935460 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-k9r28" Dec 13 01:27:55.935488 kubelet[2789]: E1213 01:27:55.935482 2789 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-k9r28" Dec 13 01:27:55.935596 kubelet[2789]: E1213 01:27:55.935552 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-k9r28_kube-system(0f340b06-05bb-4342-b343-8cf6258bf943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-k9r28_kube-system(0f340b06-05bb-4342-b343-8cf6258bf943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-k9r28" podUID="0f340b06-05bb-4342-b343-8cf6258bf943" Dec 13 01:27:55.939270 containerd[1582]: time="2024-12-13T01:27:55.939218440Z" level=error msg="Failed to destroy network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.939597 containerd[1582]: time="2024-12-13T01:27:55.939536388Z" level=error msg="encountered an error cleaning up failed sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.939636 containerd[1582]: time="2024-12-13T01:27:55.939607802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d844b6d79-p5zcv,Uid:131ce79b-ba75-488a-bc92-8c7dd56c5346,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.939775 
kubelet[2789]: E1213 01:27:55.939757 2789 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:55.939825 kubelet[2789]: E1213 01:27:55.939808 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" Dec 13 01:27:55.939852 kubelet[2789]: E1213 01:27:55.939830 2789 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" Dec 13 01:27:55.939892 kubelet[2789]: E1213 01:27:55.939880 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d844b6d79-p5zcv_calico-system(131ce79b-ba75-488a-bc92-8c7dd56c5346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d844b6d79-p5zcv_calico-system(131ce79b-ba75-488a-bc92-8c7dd56c5346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" podUID="131ce79b-ba75-488a-bc92-8c7dd56c5346" Dec 13 01:27:56.210085 containerd[1582]: time="2024-12-13T01:27:56.209964249Z" level=error msg="Failed to destroy network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.210423 containerd[1582]: time="2024-12-13T01:27:56.210364921Z" level=error msg="encountered an error cleaning up failed sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.210484 containerd[1582]: time="2024-12-13T01:27:56.210441084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-s4c26,Uid:31b29d1a-8f94-417a-ad9f-c1ad8f55cdff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.211065 kubelet[2789]: E1213 01:27:56.210981 2789 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.211065 kubelet[2789]: E1213 01:27:56.211044 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" Dec 13 01:27:56.211065 kubelet[2789]: E1213 01:27:56.211066 2789 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" Dec 13 01:27:56.211193 kubelet[2789]: E1213 01:27:56.211123 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb7cd8fd-s4c26_calico-apiserver(31b29d1a-8f94-417a-ad9f-c1ad8f55cdff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb7cd8fd-s4c26_calico-apiserver(31b29d1a-8f94-417a-ad9f-c1ad8f55cdff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" podUID="31b29d1a-8f94-417a-ad9f-c1ad8f55cdff" Dec 13 01:27:56.212761 containerd[1582]: time="2024-12-13T01:27:56.212704525Z" level=error msg="Failed to destroy network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.213230 containerd[1582]: time="2024-12-13T01:27:56.213196619Z" level=error msg="encountered an error cleaning up failed sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.213314 containerd[1582]: time="2024-12-13T01:27:56.213285246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-6l2rl,Uid:102b567b-63bd-4f1d-8e44-77806d76c7e6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed 
to setup network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.213508 kubelet[2789]: E1213 01:27:56.213493 2789 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.213570 kubelet[2789]: E1213 01:27:56.213526 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" Dec 13 01:27:56.213570 kubelet[2789]: E1213 01:27:56.213552 2789 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" Dec 13 01:27:56.213613 kubelet[2789]: E1213 01:27:56.213601 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb7cd8fd-6l2rl_calico-apiserver(102b567b-63bd-4f1d-8e44-77806d76c7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb7cd8fd-6l2rl_calico-apiserver(102b567b-63bd-4f1d-8e44-77806d76c7e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" podUID="102b567b-63bd-4f1d-8e44-77806d76c7e6" Dec 13 01:27:56.217902 containerd[1582]: time="2024-12-13T01:27:56.217869335Z" level=error msg="Failed to destroy network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.218220 containerd[1582]: time="2024-12-13T01:27:56.218193544Z" level=error msg="encountered an error cleaning up failed sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.218259 containerd[1582]: time="2024-12-13T01:27:56.218239650Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-bw7qj,Uid:17c50d1a-a584-449b-a49a-4f7a961468bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.218494 kubelet[2789]: E1213 01:27:56.218462 2789 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.218555 kubelet[2789]: E1213 01:27:56.218524 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bw7qj" Dec 13 01:27:56.218592 kubelet[2789]: E1213 01:27:56.218562 2789 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bw7qj" Dec 13 01:27:56.218643 kubelet[2789]: E1213 01:27:56.218629 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bw7qj_kube-system(17c50d1a-a584-449b-a49a-4f7a961468bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bw7qj_kube-system(17c50d1a-a584-449b-a49a-4f7a961468bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bw7qj" podUID="17c50d1a-a584-449b-a49a-4f7a961468bb" Dec 13 01:27:56.261816 containerd[1582]: time="2024-12-13T01:27:56.261734617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkv2k,Uid:57384486-20a7-4c9b-a347-ccc9ae6fe4a9,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:56.326768 containerd[1582]: time="2024-12-13T01:27:56.326708742Z" level=error msg="Failed to destroy network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.327171 containerd[1582]: time="2024-12-13T01:27:56.327133840Z" level=error msg="encountered an error cleaning up failed sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.327228 containerd[1582]: time="2024-12-13T01:27:56.327207558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkv2k,Uid:57384486-20a7-4c9b-a347-ccc9ae6fe4a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.327494 kubelet[2789]: E1213 01:27:56.327467 2789 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.327573 kubelet[2789]: E1213 01:27:56.327545 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:56.327611 kubelet[2789]: E1213 01:27:56.327579 2789 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xkv2k" Dec 13 01:27:56.327672 kubelet[2789]: E1213 01:27:56.327656 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xkv2k_calico-system(57384486-20a7-4c9b-a347-ccc9ae6fe4a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xkv2k_calico-system(57384486-20a7-4c9b-a347-ccc9ae6fe4a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:56.349218 kubelet[2789]: E1213 01:27:56.349196 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:56.350228 containerd[1582]: time="2024-12-13T01:27:56.350183468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:27:56.350780 kubelet[2789]: I1213 01:27:56.350739 2789 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:27:56.351872 containerd[1582]: time="2024-12-13T01:27:56.351403641Z" level=info msg="StopPodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\"" Dec 13 01:27:56.351872 containerd[1582]: time="2024-12-13T01:27:56.351572377Z" level=info msg="Ensure that sandbox 706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438 in task-service has been cleanup successfully" Dec 13 01:27:56.353739 kubelet[2789]: I1213 01:27:56.353702 2789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:27:56.354839 containerd[1582]: time="2024-12-13T01:27:56.354771455Z" level=info msg="StopPodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\"" Dec 13 01:27:56.356490 containerd[1582]: time="2024-12-13T01:27:56.355196634Z" level=info msg="Ensure that sandbox e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861 in task-service has been cleanup successfully" Dec 13 01:27:56.357810 kubelet[2789]: I1213 01:27:56.356699 2789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:27:56.357894 containerd[1582]: time="2024-12-13T01:27:56.357860817Z" level=info msg="StopPodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\"" Dec 13 01:27:56.358158 containerd[1582]: time="2024-12-13T01:27:56.358137277Z" level=info msg="Ensure that sandbox 3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496 in task-service has been cleanup successfully" Dec 13 01:27:56.358274 kubelet[2789]: I1213 01:27:56.358244 2789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:27:56.359347 containerd[1582]: time="2024-12-13T01:27:56.359135130Z" level=info msg="StopPodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\"" Dec 13 01:27:56.362064 containerd[1582]: time="2024-12-13T01:27:56.362009438Z" level=info msg="Ensure that sandbox b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145 in task-service has been cleanup successfully" Dec 13 01:27:56.363375 kubelet[2789]: I1213 01:27:56.362964 2789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:27:56.364097 containerd[1582]: time="2024-12-13T01:27:56.363822974Z" level=info msg="StopPodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\"" Dec 13 01:27:56.364604 containerd[1582]: time="2024-12-13T01:27:56.364579255Z" level=info msg="Ensure that sandbox 75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57 in task-service has been cleanup successfully" Dec 13 01:27:56.364918 kubelet[2789]: I1213 01:27:56.364902 2789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:27:56.365753 containerd[1582]: time="2024-12-13T01:27:56.365357005Z" level=info msg="StopPodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\"" Dec 13 01:27:56.365753 containerd[1582]: time="2024-12-13T01:27:56.365510414Z" level=info msg="Ensure that sandbox 
9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9 in task-service has been cleanup successfully" Dec 13 01:27:56.409353 containerd[1582]: time="2024-12-13T01:27:56.409295574Z" level=error msg="StopPodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" failed" error="failed to destroy network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.409865 kubelet[2789]: E1213 01:27:56.409768 2789 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:27:56.410176 kubelet[2789]: E1213 01:27:56.410050 2789 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861"} Dec 13 01:27:56.410176 kubelet[2789]: E1213 01:27:56.410108 2789 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17c50d1a-a584-449b-a49a-4f7a961468bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:56.410176 kubelet[2789]: E1213 01:27:56.410149 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17c50d1a-a584-449b-a49a-4f7a961468bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bw7qj" podUID="17c50d1a-a584-449b-a49a-4f7a961468bb" Dec 13 01:27:56.421058 containerd[1582]: time="2024-12-13T01:27:56.420980660Z" level=error msg="StopPodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" failed" error="failed to destroy network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.421273 containerd[1582]: time="2024-12-13T01:27:56.420982373Z" level=error msg="StopPodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" failed" error="failed to destroy network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
01:27:56.421875 kubelet[2789]: E1213 01:27:56.421383 2789 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:27:56.421875 kubelet[2789]: E1213 01:27:56.421446 2789 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438"} Dec 13 01:27:56.421875 kubelet[2789]: E1213 01:27:56.421495 2789 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:56.421875 kubelet[2789]: E1213 01:27:56.421545 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57384486-20a7-4c9b-a347-ccc9ae6fe4a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xkv2k" podUID="57384486-20a7-4c9b-a347-ccc9ae6fe4a9" Dec 13 01:27:56.422049 kubelet[2789]: E1213 01:27:56.421590 2789 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:27:56.422049 kubelet[2789]: E1213 01:27:56.421606 2789 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496"} Dec 13 01:27:56.422049 kubelet[2789]: E1213 01:27:56.421640 2789 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"131ce79b-ba75-488a-bc92-8c7dd56c5346\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:56.422049 kubelet[2789]: E1213 01:27:56.421670 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"131ce79b-ba75-488a-bc92-8c7dd56c5346\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" podUID="131ce79b-ba75-488a-bc92-8c7dd56c5346" Dec 13 01:27:56.428380 containerd[1582]: time="2024-12-13T01:27:56.428240775Z" level=error msg="StopPodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" failed" error="failed to destroy network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.428554 kubelet[2789]: E1213 01:27:56.428512 2789 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:27:56.428604 kubelet[2789]: E1213 01:27:56.428583 2789 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9"} Dec 13 01:27:56.428639 kubelet[2789]: E1213 01:27:56.428617 2789 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f340b06-05bb-4342-b343-8cf6258bf943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:56.428709 kubelet[2789]: E1213 01:27:56.428653 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f340b06-05bb-4342-b343-8cf6258bf943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-k9r28" podUID="0f340b06-05bb-4342-b343-8cf6258bf943" Dec 13 01:27:56.428892 containerd[1582]: time="2024-12-13T01:27:56.428846763Z" level=error msg="StopPodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" failed" error="failed to destroy network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.429065 kubelet[2789]: E1213 01:27:56.429038 2789 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:27:56.429106 kubelet[2789]: E1213 01:27:56.429069 2789 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57"} Dec 13 01:27:56.429106 kubelet[2789]: E1213 01:27:56.429097 2789 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:56.429202 kubelet[2789]: E1213 01:27:56.429118 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" podUID="31b29d1a-8f94-417a-ad9f-c1ad8f55cdff" Dec 13 01:27:56.430352 containerd[1582]: time="2024-12-13T01:27:56.430313537Z" level=error msg="StopPodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" failed" error="failed to destroy network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:56.430481 kubelet[2789]: E1213 01:27:56.430461 2789 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:27:56.430540 kubelet[2789]: E1213 01:27:56.430484 2789 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145"} Dec 13 01:27:56.430540 kubelet[2789]: E1213 01:27:56.430509 2789 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"102b567b-63bd-4f1d-8e44-77806d76c7e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" Dec 13 01:27:56.430540 kubelet[2789]: E1213 01:27:56.430540 2789 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"102b567b-63bd-4f1d-8e44-77806d76c7e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" podUID="102b567b-63bd-4f1d-8e44-77806d76c7e6" Dec 13 01:27:56.499142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496-shm.mount: Deactivated successfully. Dec 13 01:27:57.516065 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:33050.service - OpenSSH per-connection server daemon (10.0.0.1:33050). Dec 13 01:27:57.552280 sshd[3915]: Accepted publickey for core from 10.0.0.1 port 33050 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:57.554094 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:57.558670 systemd-logind[1557]: New session 9 of user core. Dec 13 01:27:57.566068 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:27:57.793563 sshd[3915]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:57.798434 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:33050.service: Deactivated successfully. Dec 13 01:27:57.801161 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:27:57.801232 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:27:57.802585 systemd-logind[1557]: Removed session 9. Dec 13 01:28:01.132579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360470893.mount: Deactivated successfully. 
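Every sandbox failure above reduces to the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename and the file is not there, because the calico/node agent has not yet started on this host and written it. The following is a minimal Go sketch, not part of the log, that reproduces that prerequisite check; the file path is quoted verbatim from the error messages, while the program itself is purely illustrative.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path quoted verbatim in the kubelet/containerd errors above.
    	const nodenameFile = "/var/lib/calico/nodename"

    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		// This is the condition behind "stat /var/lib/calico/nodename:
    		// no such file or directory" -- calico/node has not written the
    		// file yet, so CNI ADD/DEL for pods cannot proceed.
    		fmt.Fprintf(os.Stderr, "calico/node not ready: %v\n", err)
    		os.Exit(1)
    	}
    	fmt.Printf("calico/node registered this host as %q\n", string(data))
    }

Once calico/node is running on the host and has written that file, the plugin's add and delete calls stop failing, which is what the entries that follow show.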
Dec 13 01:28:02.778327 containerd[1582]: time="2024-12-13T01:28:02.778190707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:02.779587 containerd[1582]: time="2024-12-13T01:28:02.779564927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:28:02.780770 containerd[1582]: time="2024-12-13T01:28:02.780732829Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:02.782997 containerd[1582]: time="2024-12-13T01:28:02.782968827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:02.783604 containerd[1582]: time="2024-12-13T01:28:02.783566379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.433339919s" Dec 13 01:28:02.783647 containerd[1582]: time="2024-12-13T01:28:02.783601845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:28:02.791460 containerd[1582]: time="2024-12-13T01:28:02.791425371Z" level=info msg="CreateContainer within sandbox \"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:28:02.805023 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:33058.service - OpenSSH per-connection server daemon (10.0.0.1:33058). Dec 13 01:28:02.815044 containerd[1582]: time="2024-12-13T01:28:02.815005625Z" level=info msg="CreateContainer within sandbox \"056b8d4503ad6e782e065501887c4bebe685b239b0fd301886ce8b91daf72ad6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57f7f79b1e4534d5a6161414494180b01174cf8ae275277d98205df254c2f630\"" Dec 13 01:28:02.815564 containerd[1582]: time="2024-12-13T01:28:02.815541793Z" level=info msg="StartContainer for \"57f7f79b1e4534d5a6161414494180b01174cf8ae275277d98205df254c2f630\"" Dec 13 01:28:02.843055 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 33058 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:02.844883 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:02.851231 systemd-logind[1557]: New session 10 of user core. Dec 13 01:28:02.856068 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:28:03.045044 sshd[3937]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:03.050644 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:33058.service: Deactivated successfully. Dec 13 01:28:03.054905 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:28:03.055091 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:28:03.057404 systemd-logind[1557]: Removed session 10. Dec 13 01:28:03.071332 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 13 01:28:03.071902 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:28:03.205562 containerd[1582]: time="2024-12-13T01:28:03.205502650Z" level=info msg="StartContainer for \"57f7f79b1e4534d5a6161414494180b01174cf8ae275277d98205df254c2f630\" returns successfully" Dec 13 01:28:03.389828 kubelet[2789]: E1213 01:28:03.389682 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:04.391165 kubelet[2789]: E1213 01:28:04.391134 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:07.259549 containerd[1582]: time="2024-12-13T01:28:07.259462999Z" level=info msg="StopPodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\"" Dec 13 01:28:07.260363 containerd[1582]: time="2024-12-13T01:28:07.259552638Z" level=info msg="StopPodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\"" Dec 13 01:28:07.314129 kubelet[2789]: I1213 01:28:07.314052 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-589p9" podStartSLOduration=5.75927927 podStartE2EDuration="25.314004277s" podCreationTimestamp="2024-12-13 01:27:42 +0000 UTC" firstStartedPulling="2024-12-13 01:27:43.229034514 +0000 UTC m=+21.086527413" lastFinishedPulling="2024-12-13 01:28:02.783759521 +0000 UTC m=+40.641252420" observedRunningTime="2024-12-13 01:28:03.623495636 +0000 UTC m=+41.480988535" watchObservedRunningTime="2024-12-13 01:28:07.314004277 +0000 UTC m=+45.171497176" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.315 [INFO][4218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.315 [INFO][4218] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" iface="eth0" netns="/var/run/netns/cni-26d9bf82-833a-8092-cb7a-eae107bc20d8" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.316 [INFO][4218] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" iface="eth0" netns="/var/run/netns/cni-26d9bf82-833a-8092-cb7a-eae107bc20d8" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.317 [INFO][4218] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" iface="eth0" netns="/var/run/netns/cni-26d9bf82-833a-8092-cb7a-eae107bc20d8" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.317 [INFO][4218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.317 [INFO][4218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.373 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.374 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.374 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.380 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.380 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.381 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:07.385524 containerd[1582]: 2024-12-13 01:28:07.383 [INFO][4218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:07.386189 containerd[1582]: time="2024-12-13T01:28:07.385698244Z" level=info msg="TearDown network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" successfully" Dec 13 01:28:07.386189 containerd[1582]: time="2024-12-13T01:28:07.385726207Z" level=info msg="StopPodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" returns successfully" Dec 13 01:28:07.386264 kubelet[2789]: E1213 01:28:07.386175 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:07.386995 containerd[1582]: time="2024-12-13T01:28:07.386969579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-k9r28,Uid:0f340b06-05bb-4342-b343-8cf6258bf943,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:07.389383 systemd[1]: run-netns-cni\x2d26d9bf82\x2d833a\x2d8092\x2dcb7a\x2deae107bc20d8.mount: Deactivated successfully. 
Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.313 [INFO][4219] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.314 [INFO][4219] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" iface="eth0" netns="/var/run/netns/cni-dbb1698a-b66f-7de0-cb16-6b16230d6492" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.314 [INFO][4219] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" iface="eth0" netns="/var/run/netns/cni-dbb1698a-b66f-7de0-cb16-6b16230d6492" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.316 [INFO][4219] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" iface="eth0" netns="/var/run/netns/cni-dbb1698a-b66f-7de0-cb16-6b16230d6492" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.316 [INFO][4219] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.316 [INFO][4219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.373 [INFO][4234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.374 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.381 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.391 [WARNING][4234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.392 [INFO][4234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.393 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:07.397996 containerd[1582]: 2024-12-13 01:28:07.395 [INFO][4219] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:07.398395 containerd[1582]: time="2024-12-13T01:28:07.398155031Z" level=info msg="TearDown network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" successfully" Dec 13 01:28:07.398395 containerd[1582]: time="2024-12-13T01:28:07.398182693Z" level=info msg="StopPodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" returns successfully" Dec 13 01:28:07.398753 containerd[1582]: time="2024-12-13T01:28:07.398718920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkv2k,Uid:57384486-20a7-4c9b-a347-ccc9ae6fe4a9,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:07.400878 systemd[1]: run-netns-cni\x2ddbb1698a\x2db66f\x2d7de0\x2dcb16\x2d6b16230d6492.mount: Deactivated successfully. Dec 13 01:28:07.578162 systemd-networkd[1242]: cali1b30d2846a5: Link UP Dec 13 01:28:07.578471 systemd-networkd[1242]: cali1b30d2846a5: Gained carrier Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.495 [INFO][4249] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.508 [INFO][4249] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--k9r28-eth0 coredns-76f75df574- kube-system 0f340b06-05bb-4342-b343-8cf6258bf943 900 0 2024-12-13 01:27:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-k9r28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b30d2846a5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.508 [INFO][4249] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.537 [INFO][4275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" HandleID="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.546 [INFO][4275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" HandleID="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7070), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-k9r28", "timestamp":"2024-12-13 01:28:07.537432472 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.546 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.546 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.546 [INFO][4275] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.548 [INFO][4275] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.552 [INFO][4275] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.555 [INFO][4275] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.556 [INFO][4275] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.558 [INFO][4275] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.558 [INFO][4275] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.559 [INFO][4275] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646 Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.563 [INFO][4275] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.567 [INFO][4275] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.567 [INFO][4275] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" host="localhost" Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.567 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:07.591555 containerd[1582]: 2024-12-13 01:28:07.567 [INFO][4275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" HandleID="k8s-pod-network.12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.592276 containerd[1582]: 2024-12-13 01:28:07.569 [INFO][4249] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--k9r28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f340b06-05bb-4342-b343-8cf6258bf943", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-k9r28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b30d2846a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.592276 containerd[1582]: 2024-12-13 01:28:07.569 [INFO][4249] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.592276 containerd[1582]: 2024-12-13 01:28:07.569 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b30d2846a5 ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.592276 containerd[1582]: 2024-12-13 01:28:07.578 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.592276 containerd[1582]: 2024-12-13 01:28:07.580 
[INFO][4249] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--k9r28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f340b06-05bb-4342-b343-8cf6258bf943", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646", Pod:"coredns-76f75df574-k9r28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b30d2846a5", MAC:"4e:d0:51:a5:0f:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.592276 containerd[1582]: 2024-12-13 01:28:07.586 [INFO][4249] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646" Namespace="kube-system" Pod="coredns-76f75df574-k9r28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:07.605229 systemd-networkd[1242]: cali6086cc40e8e: Link UP Dec 13 01:28:07.606400 systemd-networkd[1242]: cali6086cc40e8e: Gained carrier Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.499 [INFO][4255] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.508 [INFO][4255] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xkv2k-eth0 csi-node-driver- calico-system 57384486-20a7-4c9b-a347-ccc9ae6fe4a9 899 0 2024-12-13 01:27:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xkv2k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6086cc40e8e [] []}} 
ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.508 [INFO][4255] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.538 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" HandleID="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.548 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" HandleID="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xkv2k", "timestamp":"2024-12-13 01:28:07.538598499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.548 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.567 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.567 [INFO][4274] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.570 [INFO][4274] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.573 [INFO][4274] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.577 [INFO][4274] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.579 [INFO][4274] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.584 [INFO][4274] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.584 [INFO][4274] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.588 [INFO][4274] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82 Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.593 [INFO][4274] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.598 [INFO][4274] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.598 [INFO][4274] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" host="localhost" Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.598 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
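The [4274]/[4275] IPAM entries above show Calico's block-affinity allocation: the node "localhost" holds an affinity for 192.168.88.128/26, so under the host-wide IPAM lock each new workload receives the next free address from that block (.129 for coredns-76f75df574-k9r28, .130 for csi-node-driver-xkv2k). The Go sketch below illustrates that idea only; it is a deliberately simplified stand-in (a mutex for the lock, a map for claimed addresses), not Calico's actual ipam.go code.

```go
// Deliberately simplified sketch of per-block address assignment as seen in
// the IPAM log entries above; this is NOT Calico's ipam.go, just the idea:
// a host-affine /26 block hands out the next free address under a lock.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	mu       sync.Mutex      // stands in for the "host-wide IPAM lock"
	cidr     *net.IPNet      // e.g. 192.168.88.128/26, the block affine to "localhost"
	assigned map[string]bool // addresses already claimed from this block
}

func (b *block) assign() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	base := b.cidr.IP.To4()
	for i := 1; i < 64; i++ { // a /26 spans 64 addresses; start at .129
		cand := net.IPv4(base[0], base[1], base[2], base[3]+byte(i)).To4()
		if b.assigned[cand.String()] {
			continue
		}
		b.assigned[cand.String()] = true
		return cand, nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, assigned: map[string]bool{}}
	coredns, _ := b.assign()
	csi, _ := b.assign()
	fmt.Println(coredns, csi) // 192.168.88.129 192.168.88.130, matching the log
}
```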
Dec 13 01:28:07.619392 containerd[1582]: 2024-12-13 01:28:07.598 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" HandleID="k8s-pod-network.432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.620017 containerd[1582]: 2024-12-13 01:28:07.601 [INFO][4255] cni-plugin/k8s.go 386: Populated endpoint ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xkv2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57384486-20a7-4c9b-a347-ccc9ae6fe4a9", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xkv2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6086cc40e8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.620017 containerd[1582]: 2024-12-13 01:28:07.602 [INFO][4255] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.620017 containerd[1582]: 2024-12-13 01:28:07.602 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6086cc40e8e ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.620017 containerd[1582]: 2024-12-13 01:28:07.605 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.620017 containerd[1582]: 2024-12-13 01:28:07.605 [INFO][4255] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xkv2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57384486-20a7-4c9b-a347-ccc9ae6fe4a9", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82", Pod:"csi-node-driver-xkv2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6086cc40e8e", MAC:"26:a7:ba:bc:b7:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:07.620017 containerd[1582]: 2024-12-13 01:28:07.616 [INFO][4255] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82" Namespace="calico-system" Pod="csi-node-driver-xkv2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:07.628200 containerd[1582]: time="2024-12-13T01:28:07.627484311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:07.628200 containerd[1582]: time="2024-12-13T01:28:07.627572727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:07.628200 containerd[1582]: time="2024-12-13T01:28:07.627585701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.628200 containerd[1582]: time="2024-12-13T01:28:07.627805423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.641621 containerd[1582]: time="2024-12-13T01:28:07.641521143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:07.641621 containerd[1582]: time="2024-12-13T01:28:07.641588189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:07.641621 containerd[1582]: time="2024-12-13T01:28:07.641602586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.641809 containerd[1582]: time="2024-12-13T01:28:07.641716579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:07.654485 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:07.670218 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:07.686733 containerd[1582]: time="2024-12-13T01:28:07.686687896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-k9r28,Uid:0f340b06-05bb-4342-b343-8cf6258bf943,Namespace:kube-system,Attempt:1,} returns sandbox id \"12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646\"" Dec 13 01:28:07.687981 kubelet[2789]: E1213 01:28:07.687297 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:07.688058 containerd[1582]: time="2024-12-13T01:28:07.687910311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkv2k,Uid:57384486-20a7-4c9b-a347-ccc9ae6fe4a9,Namespace:calico-system,Attempt:1,} returns sandbox id \"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82\"" Dec 13 01:28:07.690635 containerd[1582]: time="2024-12-13T01:28:07.690606141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:28:07.691087 containerd[1582]: time="2024-12-13T01:28:07.691053640Z" level=info msg="CreateContainer within sandbox \"12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:07.728677 containerd[1582]: time="2024-12-13T01:28:07.728624118Z" level=info msg="CreateContainer within sandbox \"12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2f5d91f9d87427d0e9d95b954f080acd0e582a3c886bf972a009dd9dc5d4ed2\"" Dec 13 01:28:07.729849 containerd[1582]: time="2024-12-13T01:28:07.729200159Z" level=info msg="StartContainer for \"a2f5d91f9d87427d0e9d95b954f080acd0e582a3c886bf972a009dd9dc5d4ed2\"" Dec 13 01:28:07.793031 containerd[1582]: time="2024-12-13T01:28:07.792980244Z" level=info msg="StartContainer for \"a2f5d91f9d87427d0e9d95b954f080acd0e582a3c886bf972a009dd9dc5d4ed2\" returns successfully" Dec 13 01:28:08.053041 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:56612.service - OpenSSH per-connection server daemon (10.0.0.1:56612). Dec 13 01:28:08.092120 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 56612 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:08.094438 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:08.099023 systemd-logind[1557]: New session 11 of user core. Dec 13 01:28:08.107162 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:28:08.236698 sshd[4457]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:08.243095 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:56624.service - OpenSSH per-connection server daemon (10.0.0.1:56624). Dec 13 01:28:08.243737 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:56612.service: Deactivated successfully. Dec 13 01:28:08.247391 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:28:08.249485 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:28:08.250503 systemd-logind[1557]: Removed session 11. 
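The repeated kubelet warning "Nameserver limits exceeded" reflects the Linux resolver's cap of three nameservers in resolv.conf: the node evidently lists more than three, so kubelet keeps the first three and logs the applied line (1.1.1.1 1.0.0.1 8.8.8.8). Below is a minimal sketch of that trimming, assuming a hypothetical fourth entry on the node; it is not kubelet's actual dns.go code.

```go
// Minimal sketch (not kubelet's dns.go) of the trimming behind the
// "Nameserver limits exceeded" warning: only three resolv.conf nameservers
// are honoured, extras are dropped and the kept ones are logged.
package main

import "fmt"

const maxNameservers = 3 // the classic resolver limit (glibc MAXNS)

func applyNameserverLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	return servers[:maxNameservers]
}

func main() {
	// The first three entries are the ones the log reports as applied;
	// the fourth is a hypothetical extra that would trigger the warning.
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	fmt.Println(applyNameserverLimit(node)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```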
Dec 13 01:28:08.281060 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 56624 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:08.282538 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:08.286779 systemd-logind[1557]: New session 12 of user core. Dec 13 01:28:08.296141 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:28:08.412176 kubelet[2789]: E1213 01:28:08.412053 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:08.430783 kubelet[2789]: I1213 01:28:08.430719 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-k9r28" podStartSLOduration=33.430670564 podStartE2EDuration="33.430670564s" podCreationTimestamp="2024-12-13 01:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:08.43007698 +0000 UTC m=+46.287569889" watchObservedRunningTime="2024-12-13 01:28:08.430670564 +0000 UTC m=+46.288163463" Dec 13 01:28:08.450872 sshd[4470]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:08.462079 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:56638.service - OpenSSH per-connection server daemon (10.0.0.1:56638). Dec 13 01:28:08.462638 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:56624.service: Deactivated successfully. Dec 13 01:28:08.468451 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:28:08.471565 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:28:08.474870 systemd-logind[1557]: Removed session 12. Dec 13 01:28:08.505043 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 56638 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:08.506748 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:08.511051 systemd-logind[1557]: New session 13 of user core. Dec 13 01:28:08.516091 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:28:08.632400 sshd[4483]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:08.636914 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:56638.service: Deactivated successfully. Dec 13 01:28:08.639380 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:28:08.639575 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:28:08.640707 systemd-logind[1557]: Removed session 13. 
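The podStartSLOduration of ~33.43s reported for coredns-76f75df574-k9r28 is simply the gap between the pod's creation timestamp (01:27:35) and the time kubelet observed it running (01:28:08.43); the firstStartedPulling/lastFinishedPulling fields stay at the zero time because no image pull was needed. A quick check of that arithmetic (the logged SLO value differs only in the sub-millisecond digits, since it is computed at a slightly different instant):

```go
// Reconstructing the ~33.43s podStartSLOduration from the two timestamps
// that appear in the kubelet log line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2024-12-13T01:27:35Z")              // podCreationTimestamp
	running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:28:08.43007698Z") // observedRunningTime
	fmt.Println(running.Sub(created))                                           // 33.43007698s
}
```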
Dec 13 01:28:08.659973 systemd-networkd[1242]: cali6086cc40e8e: Gained IPv6LL Dec 13 01:28:09.113464 containerd[1582]: time="2024-12-13T01:28:09.113385489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:09.117818 containerd[1582]: time="2024-12-13T01:28:09.114449756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:28:09.117818 containerd[1582]: time="2024-12-13T01:28:09.116823471Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:09.121054 containerd[1582]: time="2024-12-13T01:28:09.121004156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:09.121748 containerd[1582]: time="2024-12-13T01:28:09.121684292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.431044467s" Dec 13 01:28:09.121748 containerd[1582]: time="2024-12-13T01:28:09.121720560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:28:09.124049 containerd[1582]: time="2024-12-13T01:28:09.123995168Z" level=info msg="CreateContainer within sandbox \"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:28:09.142748 containerd[1582]: time="2024-12-13T01:28:09.142687124Z" level=info msg="CreateContainer within sandbox \"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0d49f5c465bbae133cb834e230f8e1df18eead82bd130f08a667283d7ba89c68\"" Dec 13 01:28:09.143313 containerd[1582]: time="2024-12-13T01:28:09.143273685Z" level=info msg="StartContainer for \"0d49f5c465bbae133cb834e230f8e1df18eead82bd130f08a667283d7ba89c68\"" Dec 13 01:28:09.231298 containerd[1582]: time="2024-12-13T01:28:09.231235067Z" level=info msg="StartContainer for \"0d49f5c465bbae133cb834e230f8e1df18eead82bd130f08a667283d7ba89c68\" returns successfully" Dec 13 01:28:09.233044 containerd[1582]: time="2024-12-13T01:28:09.233008355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:28:09.260228 containerd[1582]: time="2024-12-13T01:28:09.260169083Z" level=info msg="StopPodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\"" Dec 13 01:28:09.260568 containerd[1582]: time="2024-12-13T01:28:09.260485636Z" level=info msg="StopPodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\"" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.314 [INFO][4596] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.314 [INFO][4596] cni-plugin/dataplane_linux.go 559: Deleting workload's device 
in netns. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" iface="eth0" netns="/var/run/netns/cni-47f3c9ca-9e56-5fe4-cd76-2dc27cb286a2" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.314 [INFO][4596] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" iface="eth0" netns="/var/run/netns/cni-47f3c9ca-9e56-5fe4-cd76-2dc27cb286a2" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.315 [INFO][4596] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" iface="eth0" netns="/var/run/netns/cni-47f3c9ca-9e56-5fe4-cd76-2dc27cb286a2" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.315 [INFO][4596] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.315 [INFO][4596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.337 [INFO][4616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.337 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.337 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.342 [WARNING][4616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.342 [INFO][4616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.344 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:09.348217 containerd[1582]: 2024-12-13 01:28:09.346 [INFO][4596] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:09.348693 containerd[1582]: time="2024-12-13T01:28:09.348520466Z" level=info msg="TearDown network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" successfully" Dec 13 01:28:09.348693 containerd[1582]: time="2024-12-13T01:28:09.348561443Z" level=info msg="StopPodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" returns successfully" Dec 13 01:28:09.349424 containerd[1582]: time="2024-12-13T01:28:09.349387282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-6l2rl,Uid:102b567b-63bd-4f1d-8e44-77806d76c7e6,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.311 [INFO][4605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.312 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" iface="eth0" netns="/var/run/netns/cni-a58d01c1-8da1-170d-023b-7ac28a2b440a" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.313 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" iface="eth0" netns="/var/run/netns/cni-a58d01c1-8da1-170d-023b-7ac28a2b440a" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.313 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" iface="eth0" netns="/var/run/netns/cni-a58d01c1-8da1-170d-023b-7ac28a2b440a" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.313 [INFO][4605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.313 [INFO][4605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.339 [INFO][4615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.339 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.344 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.348 [WARNING][4615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.348 [INFO][4615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.349 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:09.354643 containerd[1582]: 2024-12-13 01:28:09.352 [INFO][4605] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:09.355134 containerd[1582]: time="2024-12-13T01:28:09.354878997Z" level=info msg="TearDown network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" successfully" Dec 13 01:28:09.355134 containerd[1582]: time="2024-12-13T01:28:09.354909254Z" level=info msg="StopPodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" returns successfully" Dec 13 01:28:09.355661 containerd[1582]: time="2024-12-13T01:28:09.355639334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-s4c26,Uid:31b29d1a-8f94-417a-ad9f-c1ad8f55cdff,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:09.398862 systemd[1]: run-netns-cni\x2d47f3c9ca\x2d9e56\x2d5fe4\x2dcd76\x2d2dc27cb286a2.mount: Deactivated successfully. Dec 13 01:28:09.399032 systemd[1]: run-netns-cni\x2da58d01c1\x2d8da1\x2d170d\x2d023b\x2d7ac28a2b440a.mount: Deactivated successfully. 
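The two WARNING entries ("Asked to release address but it doesn't exist. Ignoring") during these StopPodSandbox calls are benign: the old calico-apiserver sandboxes being torn down have no allocation recorded under their IPAM handles (presumably never assigned, or already released), and Calico treats such a release as an idempotent no-op. A toy sketch of that behaviour, mirroring the log rather than Calico's actual code:

```go
// Toy illustration of the idempotent release behind the WARNING above:
// releasing an IPAM handle with no recorded allocation is ignored rather
// than treated as an error. This mirrors the log, not Calico's actual code.
package main

import "fmt"

var allocations = map[string][]string{} // handleID -> IPs claimed under it

func releaseByHandle(handle string) {
	ips, ok := allocations[handle]
	if !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist, ignoring\n", handle)
		return
	}
	delete(allocations, handle)
	fmt.Printf("released %v for %q\n", ips, handle)
}

func main() {
	// Handle taken from the log; nothing was allocated under it, so the
	// release is a no-op.
	releaseByHandle("k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145")
}
```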
Dec 13 01:28:09.419704 kubelet[2789]: E1213 01:28:09.419650 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:09.428050 systemd-networkd[1242]: cali1b30d2846a5: Gained IPv6LL Dec 13 01:28:09.478018 systemd-networkd[1242]: caliecdc50b6717: Link UP Dec 13 01:28:09.478944 systemd-networkd[1242]: caliecdc50b6717: Gained carrier Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.386 [INFO][4631] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.402 [INFO][4631] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0 calico-apiserver-6fb7cd8fd- calico-apiserver 102b567b-63bd-4f1d-8e44-77806d76c7e6 950 0 2024-12-13 01:27:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb7cd8fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fb7cd8fd-6l2rl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliecdc50b6717 [] []}} ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.402 [INFO][4631] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.435 [INFO][4657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" HandleID="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.446 [INFO][4657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" HandleID="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027edc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fb7cd8fd-6l2rl", "timestamp":"2024-12-13 01:28:09.435014584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.446 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.446 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.446 [INFO][4657] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.448 [INFO][4657] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.452 [INFO][4657] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.456 [INFO][4657] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.459 [INFO][4657] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.461 [INFO][4657] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.461 [INFO][4657] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.463 [INFO][4657] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383 Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.466 [INFO][4657] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.472 [INFO][4657] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.472 [INFO][4657] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" host="localhost" Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.472 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:09.490617 containerd[1582]: 2024-12-13 01:28:09.472 [INFO][4657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" HandleID="k8s-pod-network.28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.491291 containerd[1582]: 2024-12-13 01:28:09.475 [INFO][4631] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"102b567b-63bd-4f1d-8e44-77806d76c7e6", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fb7cd8fd-6l2rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecdc50b6717", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:09.491291 containerd[1582]: 2024-12-13 01:28:09.475 [INFO][4631] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.491291 containerd[1582]: 2024-12-13 01:28:09.475 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecdc50b6717 ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.491291 containerd[1582]: 2024-12-13 01:28:09.479 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.491291 containerd[1582]: 2024-12-13 01:28:09.479 [INFO][4631] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" 
Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"102b567b-63bd-4f1d-8e44-77806d76c7e6", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383", Pod:"calico-apiserver-6fb7cd8fd-6l2rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecdc50b6717", MAC:"26:3d:f5:d2:20:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:09.491291 containerd[1582]: 2024-12-13 01:28:09.488 [INFO][4631] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-6l2rl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:09.511667 systemd-networkd[1242]: cali35667c5d2c8: Link UP Dec 13 01:28:09.512083 systemd-networkd[1242]: cali35667c5d2c8: Gained carrier Dec 13 01:28:09.514257 containerd[1582]: time="2024-12-13T01:28:09.513786722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:09.514257 containerd[1582]: time="2024-12-13T01:28:09.513867113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:09.514257 containerd[1582]: time="2024-12-13T01:28:09.513881480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:09.514257 containerd[1582]: time="2024-12-13T01:28:09.513990204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.401 [INFO][4641] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.415 [INFO][4641] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0 calico-apiserver-6fb7cd8fd- calico-apiserver 31b29d1a-8f94-417a-ad9f-c1ad8f55cdff 949 0 2024-12-13 01:27:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb7cd8fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fb7cd8fd-s4c26 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35667c5d2c8 [] []}} ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.415 [INFO][4641] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.456 [INFO][4665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" HandleID="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.463 [INFO][4665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" HandleID="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003099d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fb7cd8fd-s4c26", "timestamp":"2024-12-13 01:28:09.456453695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.464 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.472 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.472 [INFO][4665] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.474 [INFO][4665] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.479 [INFO][4665] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.483 [INFO][4665] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.489 [INFO][4665] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.492 [INFO][4665] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.492 [INFO][4665] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.494 [INFO][4665] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.499 [INFO][4665] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.506 [INFO][4665] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.506 [INFO][4665] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" host="localhost" Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.506 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:09.528448 containerd[1582]: 2024-12-13 01:28:09.506 [INFO][4665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" HandleID="k8s-pod-network.5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.529071 containerd[1582]: 2024-12-13 01:28:09.509 [INFO][4641] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fb7cd8fd-s4c26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35667c5d2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:09.529071 containerd[1582]: 2024-12-13 01:28:09.509 [INFO][4641] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.529071 containerd[1582]: 2024-12-13 01:28:09.509 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35667c5d2c8 ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.529071 containerd[1582]: 2024-12-13 01:28:09.513 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.529071 containerd[1582]: 2024-12-13 01:28:09.513 [INFO][4641] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" 
Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb", Pod:"calico-apiserver-6fb7cd8fd-s4c26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35667c5d2c8", MAC:"9e:c2:59:63:fd:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:09.529071 containerd[1582]: 2024-12-13 01:28:09.525 [INFO][4641] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7cd8fd-s4c26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:09.547498 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:09.552567 containerd[1582]: time="2024-12-13T01:28:09.552372800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:09.552567 containerd[1582]: time="2024-12-13T01:28:09.552442511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:09.552567 containerd[1582]: time="2024-12-13T01:28:09.552462809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:09.552943 containerd[1582]: time="2024-12-13T01:28:09.552866326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:09.579816 containerd[1582]: time="2024-12-13T01:28:09.579744865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-6l2rl,Uid:102b567b-63bd-4f1d-8e44-77806d76c7e6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383\"" Dec 13 01:28:09.583119 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:09.611953 containerd[1582]: time="2024-12-13T01:28:09.611904983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7cd8fd-s4c26,Uid:31b29d1a-8f94-417a-ad9f-c1ad8f55cdff,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb\"" Dec 13 01:28:10.259630 containerd[1582]: time="2024-12-13T01:28:10.259554917Z" level=info msg="StopPodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\"" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.305 [INFO][4824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.306 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" iface="eth0" netns="/var/run/netns/cni-e9b9f464-44d1-76df-ce0c-f5c27af772c1" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.306 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" iface="eth0" netns="/var/run/netns/cni-e9b9f464-44d1-76df-ce0c-f5c27af772c1" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.306 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" iface="eth0" netns="/var/run/netns/cni-e9b9f464-44d1-76df-ce0c-f5c27af772c1" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.306 [INFO][4824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.306 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.332 [INFO][4832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.332 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.332 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.338 [WARNING][4832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.338 [INFO][4832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.340 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:10.345051 containerd[1582]: 2024-12-13 01:28:10.342 [INFO][4824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:10.345490 containerd[1582]: time="2024-12-13T01:28:10.345323147Z" level=info msg="TearDown network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" successfully" Dec 13 01:28:10.345490 containerd[1582]: time="2024-12-13T01:28:10.345358954Z" level=info msg="StopPodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" returns successfully" Dec 13 01:28:10.346139 containerd[1582]: time="2024-12-13T01:28:10.346110684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d844b6d79-p5zcv,Uid:131ce79b-ba75-488a-bc92-8c7dd56c5346,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:10.393587 systemd[1]: run-netns-cni\x2de9b9f464\x2d44d1\x2d76df\x2dce0c\x2df5c27af772c1.mount: Deactivated successfully. 
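The teardown entries above follow a fixed pattern: release the IP by handle ID, fall back to the workload ID, and treat a missing allocation as a warning so a repeated StopPodSandbox stays harmless. A minimal Go sketch of that idempotent release pattern; the store type, function name and sample keys are assumptions for illustration, not Calico's datastore code:

package main

import (
	"fmt"
	"log"
)

// store maps an allocation key (handle ID or workload ID) to the IP it owns.
// This whole type and the release flow below are an illustrative guess at the
// pattern visible in the teardown logs, not Calico's real datastore code.
type store map[string]string

// releaseIP tries the handle ID first and falls back to the workload ID,
// treating "nothing to release" as a warning rather than an error, which is
// why repeated teardowns in the log above stay harmless.
func releaseIP(s store, handleID, workloadID string) {
	for _, key := range []string{handleID, workloadID} {
		if ip, ok := s[key]; ok {
			delete(s, key)
			log.Printf("released %s using key %s", ip, key)
			return
		}
	}
	log.Printf("WARNING: asked to release address but it doesn't exist, ignoring (handle=%s)", handleID)
}

func main() {
	s := store{}
	// Simulate a repeat teardown: the allocation is already gone, so the
	// call only logs a warning instead of failing.
	releaseIP(s, "handle-demo", "workload-demo")
	fmt.Println("remaining allocations:", len(s))
}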
Dec 13 01:28:10.426747 kubelet[2789]: E1213 01:28:10.426715 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:10.628682 systemd-networkd[1242]: cali73055a67f7e: Link UP Dec 13 01:28:10.629578 systemd-networkd[1242]: cali73055a67f7e: Gained carrier Dec 13 01:28:10.643889 systemd-networkd[1242]: cali35667c5d2c8: Gained IPv6LL Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.385 [INFO][4840] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.394 [INFO][4840] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0 calico-kube-controllers-7d844b6d79- calico-system 131ce79b-ba75-488a-bc92-8c7dd56c5346 965 0 2024-12-13 01:27:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d844b6d79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d844b6d79-p5zcv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73055a67f7e [] []}} ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.394 [INFO][4840] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.451 [INFO][4853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" HandleID="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.459 [INFO][4853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" HandleID="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d844b6d79-p5zcv", "timestamp":"2024-12-13 01:28:10.45141906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.459 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.459 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.459 [INFO][4853] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.461 [INFO][4853] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.464 [INFO][4853] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.467 [INFO][4853] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.469 [INFO][4853] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.471 [INFO][4853] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.471 [INFO][4853] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.472 [INFO][4853] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348 Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.599 [INFO][4853] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.623 [INFO][4853] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.623 [INFO][4853] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" host="localhost" Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.623 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
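The IPAM sequence just logged (look up block affinity for 192.168.88.128/26, load the block, claim one address under the host-wide lock) amounts to picking the first unassigned address inside the node's /26 block. A minimal Go sketch of that selection step, using only the standard library; nextFreeIP, increment and the hard-coded in-use set are illustrative assumptions, not Calico's actual IPAM code:

package main

import (
	"fmt"
	"net"
)

// nextFreeIP walks the addresses of a block (e.g. 192.168.88.128/26) and
// returns the first one not already assigned. Illustrative sketch only;
// real Calico IPAM also handles affinities, handles and a datastore-backed
// lock, none of which are modeled here.
func nextFreeIP(block string, inUse map[string]bool) (net.IP, error) {
	ip, ipnet, err := net.ParseCIDR(block)
	if err != nil {
		return nil, err
	}
	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = increment(cur) {
		if !inUse[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("no free addresses in %s", block)
}

// increment returns the next IPv4 address after ip without mutating it.
func increment(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	// Addresses the log shows as already taken from 192.168.88.128/26.
	inUse := map[string]bool{
		"192.168.88.128": true, // network address, skipped
		"192.168.88.129": true,
		"192.168.88.130": true,
		"192.168.88.131": true,
		"192.168.88.132": true,
	}
	ip, err := nextFreeIP("192.168.88.128/26", inUse)
	if err != nil {
		panic(err)
	}
	fmt.Println("next free address:", ip) // 192.168.88.133, matching the log
}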
Dec 13 01:28:10.690567 containerd[1582]: 2024-12-13 01:28:10.623 [INFO][4853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" HandleID="k8s-pod-network.48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.691315 containerd[1582]: 2024-12-13 01:28:10.626 [INFO][4840] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0", GenerateName:"calico-kube-controllers-7d844b6d79-", Namespace:"calico-system", SelfLink:"", UID:"131ce79b-ba75-488a-bc92-8c7dd56c5346", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d844b6d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d844b6d79-p5zcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73055a67f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:10.691315 containerd[1582]: 2024-12-13 01:28:10.627 [INFO][4840] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.691315 containerd[1582]: 2024-12-13 01:28:10.627 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73055a67f7e ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.691315 containerd[1582]: 2024-12-13 01:28:10.628 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.691315 containerd[1582]: 2024-12-13 01:28:10.629 [INFO][4840] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0", GenerateName:"calico-kube-controllers-7d844b6d79-", Namespace:"calico-system", SelfLink:"", UID:"131ce79b-ba75-488a-bc92-8c7dd56c5346", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d844b6d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348", Pod:"calico-kube-controllers-7d844b6d79-p5zcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73055a67f7e", MAC:"ae:a4:55:17:88:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:10.691315 containerd[1582]: 2024-12-13 01:28:10.687 [INFO][4840] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348" Namespace="calico-system" Pod="calico-kube-controllers-7d844b6d79-p5zcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:10.768063 containerd[1582]: time="2024-12-13T01:28:10.767471548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:10.768063 containerd[1582]: time="2024-12-13T01:28:10.768032140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:10.768063 containerd[1582]: time="2024-12-13T01:28:10.768044042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:10.768245 containerd[1582]: time="2024-12-13T01:28:10.768129372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:10.790503 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:10.899007 containerd[1582]: time="2024-12-13T01:28:10.898846506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d844b6d79-p5zcv,Uid:131ce79b-ba75-488a-bc92-8c7dd56c5346,Namespace:calico-system,Attempt:1,} returns sandbox id \"48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348\"" Dec 13 01:28:11.258931 containerd[1582]: time="2024-12-13T01:28:11.258878207Z" level=info msg="StopPodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\"" Dec 13 01:28:11.348249 systemd-networkd[1242]: caliecdc50b6717: Gained IPv6LL Dec 13 01:28:11.379783 containerd[1582]: time="2024-12-13T01:28:11.379718977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.380714 containerd[1582]: time="2024-12-13T01:28:11.380629131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:28:11.382281 containerd[1582]: time="2024-12-13T01:28:11.382241878Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.385271 containerd[1582]: time="2024-12-13T01:28:11.385179131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.385806 containerd[1582]: time="2024-12-13T01:28:11.385765175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.152724841s" Dec 13 01:28:11.385877 containerd[1582]: time="2024-12-13T01:28:11.385812684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:28:11.388073 containerd[1582]: time="2024-12-13T01:28:11.387436272Z" level=info msg="CreateContainer within sandbox \"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:28:11.388497 containerd[1582]: time="2024-12-13T01:28:11.388237762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.371 [INFO][4958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.372 [INFO][4958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" iface="eth0" netns="/var/run/netns/cni-5c16a70e-6590-ccf6-068b-8cc11335f4a4" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.372 [INFO][4958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" iface="eth0" netns="/var/run/netns/cni-5c16a70e-6590-ccf6-068b-8cc11335f4a4" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.373 [INFO][4958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" iface="eth0" netns="/var/run/netns/cni-5c16a70e-6590-ccf6-068b-8cc11335f4a4" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.373 [INFO][4958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.373 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.402 [INFO][4965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.402 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.402 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.408 [WARNING][4965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.408 [INFO][4965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.410 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:11.415251 containerd[1582]: 2024-12-13 01:28:11.412 [INFO][4958] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:11.415911 containerd[1582]: time="2024-12-13T01:28:11.415599897Z" level=info msg="TearDown network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" successfully" Dec 13 01:28:11.415911 containerd[1582]: time="2024-12-13T01:28:11.415643750Z" level=info msg="StopPodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" returns successfully" Dec 13 01:28:11.416175 kubelet[2789]: E1213 01:28:11.416115 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:11.416764 containerd[1582]: time="2024-12-13T01:28:11.416632984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bw7qj,Uid:17c50d1a-a584-449b-a49a-4f7a961468bb,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:11.418787 systemd[1]: run-netns-cni\x2d5c16a70e\x2d6590\x2dccf6\x2d068b\x2d8cc11335f4a4.mount: Deactivated successfully. Dec 13 01:28:11.490019 containerd[1582]: time="2024-12-13T01:28:11.489973448Z" level=info msg="CreateContainer within sandbox \"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3e153638dd5ec219dc39b0f3a92ea14fdd434490fca99c92728fff621c0cb2cf\"" Dec 13 01:28:11.490620 containerd[1582]: time="2024-12-13T01:28:11.490592003Z" level=info msg="StartContainer for \"3e153638dd5ec219dc39b0f3a92ea14fdd434490fca99c92728fff621c0cb2cf\"" Dec 13 01:28:11.570669 containerd[1582]: time="2024-12-13T01:28:11.569306328Z" level=info msg="StartContainer for \"3e153638dd5ec219dc39b0f3a92ea14fdd434490fca99c92728fff621c0cb2cf\" returns successfully" Dec 13 01:28:11.620226 systemd-networkd[1242]: cali249921d41f8: Link UP Dec 13 01:28:11.620474 systemd-networkd[1242]: cali249921d41f8: Gained carrier Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.532 [INFO][4983] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.543 [INFO][4983] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--bw7qj-eth0 coredns-76f75df574- kube-system 17c50d1a-a584-449b-a49a-4f7a961468bb 975 0 2024-12-13 01:27:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-bw7qj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali249921d41f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.543 [INFO][4983] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.582 [INFO][5013] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" HandleID="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.590 [INFO][5013] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" HandleID="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-bw7qj", "timestamp":"2024-12-13 01:28:11.582099193 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.590 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.590 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.590 [INFO][5013] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.591 [INFO][5013] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.594 [INFO][5013] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.597 [INFO][5013] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.599 [INFO][5013] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.601 [INFO][5013] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.601 [INFO][5013] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.602 [INFO][5013] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.607 [INFO][5013] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.614 [INFO][5013] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.614 [INFO][5013] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" host="localhost" Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.614 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:11.636986 containerd[1582]: 2024-12-13 01:28:11.614 [INFO][5013] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" HandleID="k8s-pod-network.ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.637840 containerd[1582]: 2024-12-13 01:28:11.618 [INFO][4983] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bw7qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"17c50d1a-a584-449b-a49a-4f7a961468bb", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-bw7qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali249921d41f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:11.637840 containerd[1582]: 2024-12-13 01:28:11.618 [INFO][4983] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.637840 containerd[1582]: 2024-12-13 01:28:11.618 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali249921d41f8 ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.637840 containerd[1582]: 2024-12-13 01:28:11.621 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.637840 containerd[1582]: 2024-12-13 01:28:11.621 [INFO][4983] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bw7qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"17c50d1a-a584-449b-a49a-4f7a961468bb", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc", Pod:"coredns-76f75df574-bw7qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali249921d41f8", MAC:"72:7f:89:d6:df:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:11.637840 containerd[1582]: 2024-12-13 01:28:11.633 [INFO][4983] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc" Namespace="kube-system" Pod="coredns-76f75df574-bw7qj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:11.660301 containerd[1582]: time="2024-12-13T01:28:11.660132827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:11.660301 containerd[1582]: time="2024-12-13T01:28:11.660256560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:11.660969 containerd[1582]: time="2024-12-13T01:28:11.660271879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:11.661089 containerd[1582]: time="2024-12-13T01:28:11.661045306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:11.690288 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:11.722029 containerd[1582]: time="2024-12-13T01:28:11.721963013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bw7qj,Uid:17c50d1a-a584-449b-a49a-4f7a961468bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc\"" Dec 13 01:28:11.722891 kubelet[2789]: E1213 01:28:11.722863 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:11.725429 containerd[1582]: time="2024-12-13T01:28:11.725379177Z" level=info msg="CreateContainer within sandbox \"ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:11.753954 containerd[1582]: time="2024-12-13T01:28:11.753891659Z" level=info msg="CreateContainer within sandbox \"ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31e08e84d0bd1e2edc693d4fcdbac8e73c8762f88dff573418cf4f838c91e6ff\"" Dec 13 01:28:11.754970 containerd[1582]: time="2024-12-13T01:28:11.754927540Z" level=info msg="StartContainer for \"31e08e84d0bd1e2edc693d4fcdbac8e73c8762f88dff573418cf4f838c91e6ff\"" Dec 13 01:28:11.815509 containerd[1582]: time="2024-12-13T01:28:11.815460953Z" level=info msg="StartContainer for \"31e08e84d0bd1e2edc693d4fcdbac8e73c8762f88dff573418cf4f838c91e6ff\" returns successfully" Dec 13 01:28:12.383431 kubelet[2789]: I1213 01:28:12.383366 2789 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:28:12.389098 kubelet[2789]: I1213 01:28:12.389052 2789 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:28:12.437103 kubelet[2789]: E1213 01:28:12.436986 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:12.579709 kubelet[2789]: I1213 01:28:12.579663 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xkv2k" podStartSLOduration=26.883169984 podStartE2EDuration="30.579615445s" podCreationTimestamp="2024-12-13 01:27:42 +0000 UTC" firstStartedPulling="2024-12-13 01:28:07.689745155 +0000 UTC m=+45.547238054" lastFinishedPulling="2024-12-13 01:28:11.386190616 +0000 UTC m=+49.243683515" observedRunningTime="2024-12-13 01:28:12.579014029 +0000 UTC m=+50.436506938" watchObservedRunningTime="2024-12-13 01:28:12.579615445 +0000 UTC m=+50.437108344" Dec 13 01:28:12.628063 systemd-networkd[1242]: cali73055a67f7e: Gained IPv6LL Dec 13 01:28:12.633406 kubelet[2789]: I1213 01:28:12.633360 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:12.634401 kubelet[2789]: E1213 01:28:12.634220 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:12.696186 kubelet[2789]: I1213 01:28:12.696130 
2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bw7qj" podStartSLOduration=37.696087006 podStartE2EDuration="37.696087006s" podCreationTimestamp="2024-12-13 01:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:12.680187064 +0000 UTC m=+50.537679953" watchObservedRunningTime="2024-12-13 01:28:12.696087006 +0000 UTC m=+50.553579906" Dec 13 01:28:13.011991 systemd-networkd[1242]: cali249921d41f8: Gained IPv6LL Dec 13 01:28:13.141863 kernel: bpftool[5173]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:28:13.397009 systemd-networkd[1242]: vxlan.calico: Link UP Dec 13 01:28:13.397020 systemd-networkd[1242]: vxlan.calico: Gained carrier Dec 13 01:28:13.439723 kubelet[2789]: E1213 01:28:13.439691 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:13.441179 kubelet[2789]: E1213 01:28:13.439818 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:13.645662 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:56640.service - OpenSSH per-connection server daemon (10.0.0.1:56640). Dec 13 01:28:13.694341 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 56640 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:13.696361 sshd[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:13.701832 systemd-logind[1557]: New session 14 of user core. Dec 13 01:28:13.711273 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:28:13.915775 sshd[5256]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:13.920505 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:56640.service: Deactivated successfully. Dec 13 01:28:13.924520 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:28:13.925488 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:28:13.927369 systemd-logind[1557]: Removed session 14. 
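The recurring kubelet dns.go warnings in this log stem from the three-nameserver ceiling on resolv.conf: extra upstreams are dropped and only the first three are applied (1.1.1.1 1.0.0.1 8.8.8.8 here). A rough Go illustration of that truncation, where maxNameservers, applyNameserverLimit and the sample resolv.conf are assumptions rather than kubelet's real implementation:

package main

import (
	"bufio"
	"fmt"
	"log"
	"strings"
)

// maxNameservers mirrors the classic three-resolver limit; the constant and
// this whole routine are illustrative only.
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers entries from a
// resolv.conf body and reports whether anything was dropped, which is the
// situation behind the repeated "Nameserver limits exceeded" messages above.
func applyNameserverLimit(resolvConf string) (kept []string, truncated bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	var all []string
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) > maxNameservers {
		return all[:maxNameservers], true
	}
	return all, false
}

func main() {
	// Hypothetical host resolv.conf with four upstreams; the first three
	// match the "applied nameserver line" printed by the kubelet above.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, truncated := applyNameserverLimit(conf)
	if truncated {
		log.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s",
			strings.Join(kept, " "))
	}
	fmt.Println("applied:", kept)
}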
Dec 13 01:28:14.441663 kubelet[2789]: E1213 01:28:14.441607 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:14.483950 systemd-networkd[1242]: vxlan.calico: Gained IPv6LL Dec 13 01:28:15.416575 containerd[1582]: time="2024-12-13T01:28:15.416519733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.460310 containerd[1582]: time="2024-12-13T01:28:15.460218094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:28:15.490317 containerd[1582]: time="2024-12-13T01:28:15.490237760Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.509188 containerd[1582]: time="2024-12-13T01:28:15.509119908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.509995 containerd[1582]: time="2024-12-13T01:28:15.509952939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.121666736s" Dec 13 01:28:15.510057 containerd[1582]: time="2024-12-13T01:28:15.509993828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:28:15.510863 containerd[1582]: time="2024-12-13T01:28:15.510831688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:15.511933 containerd[1582]: time="2024-12-13T01:28:15.511905796Z" level=info msg="CreateContainer within sandbox \"28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:15.570280 containerd[1582]: time="2024-12-13T01:28:15.570219642Z" level=info msg="CreateContainer within sandbox \"28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"150871131c86219b080419324965f039026dd63a04f950a010b43f1243c82f52\"" Dec 13 01:28:15.571350 containerd[1582]: time="2024-12-13T01:28:15.571236619Z" level=info msg="StartContainer for \"150871131c86219b080419324965f039026dd63a04f950a010b43f1243c82f52\"" Dec 13 01:28:15.667089 containerd[1582]: time="2024-12-13T01:28:15.666808091Z" level=info msg="StartContainer for \"150871131c86219b080419324965f039026dd63a04f950a010b43f1243c82f52\" returns successfully" Dec 13 01:28:15.932496 containerd[1582]: time="2024-12-13T01:28:15.932305123Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.946975 containerd[1582]: time="2024-12-13T01:28:15.946250162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 
01:28:15.948692 containerd[1582]: time="2024-12-13T01:28:15.948638381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 437.774018ms" Dec 13 01:28:15.948692 containerd[1582]: time="2024-12-13T01:28:15.948685833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:28:15.949487 containerd[1582]: time="2024-12-13T01:28:15.949453287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:28:15.951630 containerd[1582]: time="2024-12-13T01:28:15.951580962Z" level=info msg="CreateContainer within sandbox \"5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:15.971212 containerd[1582]: time="2024-12-13T01:28:15.971154396Z" level=info msg="CreateContainer within sandbox \"5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"211f739eeabf36b8fc768b56771b2fa61eb07448bde1d65ed3f36e6c19c3a16d\"" Dec 13 01:28:15.972974 containerd[1582]: time="2024-12-13T01:28:15.972918969Z" level=info msg="StartContainer for \"211f739eeabf36b8fc768b56771b2fa61eb07448bde1d65ed3f36e6c19c3a16d\"" Dec 13 01:28:16.070414 containerd[1582]: time="2024-12-13T01:28:16.070357499Z" level=info msg="StartContainer for \"211f739eeabf36b8fc768b56771b2fa61eb07448bde1d65ed3f36e6c19c3a16d\" returns successfully" Dec 13 01:28:16.614423 kubelet[2789]: I1213 01:28:16.614362 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-6l2rl" podStartSLOduration=28.684764084 podStartE2EDuration="34.614192731s" podCreationTimestamp="2024-12-13 01:27:42 +0000 UTC" firstStartedPulling="2024-12-13 01:28:09.580950487 +0000 UTC m=+47.438443376" lastFinishedPulling="2024-12-13 01:28:15.510379123 +0000 UTC m=+53.367872023" observedRunningTime="2024-12-13 01:28:16.610385764 +0000 UTC m=+54.467878683" watchObservedRunningTime="2024-12-13 01:28:16.614192731 +0000 UTC m=+54.471685630" Dec 13 01:28:17.461515 kubelet[2789]: I1213 01:28:17.461456 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:17.564866 kubelet[2789]: I1213 01:28:17.562890 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fb7cd8fd-s4c26" podStartSLOduration=29.226916142 podStartE2EDuration="35.562827601s" podCreationTimestamp="2024-12-13 01:27:42 +0000 UTC" firstStartedPulling="2024-12-13 01:28:09.61316046 +0000 UTC m=+47.470653359" lastFinishedPulling="2024-12-13 01:28:15.949071919 +0000 UTC m=+53.806564818" observedRunningTime="2024-12-13 01:28:16.840914357 +0000 UTC m=+54.698407266" watchObservedRunningTime="2024-12-13 01:28:17.562827601 +0000 UTC m=+55.420320500" Dec 13 01:28:18.933545 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:36568.service - OpenSSH per-connection server daemon (10.0.0.1:36568). 
Dec 13 01:28:19.067200 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:19.070455 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:19.082065 systemd-logind[1557]: New session 15 of user core. Dec 13 01:28:19.097628 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:28:19.358038 sshd[5418]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:19.367069 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:36568.service: Deactivated successfully. Dec 13 01:28:19.370320 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:28:19.370750 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:28:19.376577 systemd-logind[1557]: Removed session 15. Dec 13 01:28:19.476511 systemd-resolved[1459]: Under memory pressure, flushing caches. Dec 13 01:28:19.477918 systemd-journald[1159]: Under memory pressure, flushing caches. Dec 13 01:28:19.476570 systemd-resolved[1459]: Flushed all caches. Dec 13 01:28:19.511202 containerd[1582]: time="2024-12-13T01:28:19.511127696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:19.512915 containerd[1582]: time="2024-12-13T01:28:19.512839246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:28:19.515195 containerd[1582]: time="2024-12-13T01:28:19.515125614Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:19.522297 containerd[1582]: time="2024-12-13T01:28:19.521504451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:19.522513 containerd[1582]: time="2024-12-13T01:28:19.522346905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.572860996s" Dec 13 01:28:19.522513 containerd[1582]: time="2024-12-13T01:28:19.522413444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:28:19.550317 containerd[1582]: time="2024-12-13T01:28:19.549819207Z" level=info msg="CreateContainer within sandbox \"48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:28:19.602365 containerd[1582]: time="2024-12-13T01:28:19.599263607Z" level=info msg="CreateContainer within sandbox \"48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"53ee8194b2da3b4279051facbcc21bd8db61e2bce6736fcbb7d8c831a20220b6\"" Dec 13 01:28:19.602365 containerd[1582]: time="2024-12-13T01:28:19.602032525Z" level=info msg="StartContainer for 
\"53ee8194b2da3b4279051facbcc21bd8db61e2bce6736fcbb7d8c831a20220b6\"" Dec 13 01:28:19.743213 containerd[1582]: time="2024-12-13T01:28:19.742970848Z" level=info msg="StartContainer for \"53ee8194b2da3b4279051facbcc21bd8db61e2bce6736fcbb7d8c831a20220b6\" returns successfully" Dec 13 01:28:20.605721 kubelet[2789]: I1213 01:28:20.605666 2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d844b6d79-p5zcv" podStartSLOduration=29.983628543000002 podStartE2EDuration="38.605607829s" podCreationTimestamp="2024-12-13 01:27:42 +0000 UTC" firstStartedPulling="2024-12-13 01:28:10.900807136 +0000 UTC m=+48.758300035" lastFinishedPulling="2024-12-13 01:28:19.522786422 +0000 UTC m=+57.380279321" observedRunningTime="2024-12-13 01:28:20.531129465 +0000 UTC m=+58.388622374" watchObservedRunningTime="2024-12-13 01:28:20.605607829 +0000 UTC m=+58.463100728" Dec 13 01:28:21.421949 kubelet[2789]: I1213 01:28:21.421189 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:22.175404 kernel: hrtimer: interrupt took 12991452 ns Dec 13 01:28:22.250660 containerd[1582]: time="2024-12-13T01:28:22.250150369Z" level=info msg="StopPodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\"" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.331 [WARNING][5537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0", GenerateName:"calico-kube-controllers-7d844b6d79-", Namespace:"calico-system", SelfLink:"", UID:"131ce79b-ba75-488a-bc92-8c7dd56c5346", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d844b6d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348", Pod:"calico-kube-controllers-7d844b6d79-p5zcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73055a67f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.332 [INFO][5537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.332 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" iface="eth0" netns="" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.332 [INFO][5537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.332 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.372 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.372 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.372 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.380 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.380 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.383 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:22.390491 containerd[1582]: 2024-12-13 01:28:22.386 [INFO][5537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.391205 containerd[1582]: time="2024-12-13T01:28:22.390563021Z" level=info msg="TearDown network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" successfully" Dec 13 01:28:22.391205 containerd[1582]: time="2024-12-13T01:28:22.390596124Z" level=info msg="StopPodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" returns successfully" Dec 13 01:28:22.403655 containerd[1582]: time="2024-12-13T01:28:22.403558681Z" level=info msg="RemovePodSandbox for \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\"" Dec 13 01:28:22.415477 containerd[1582]: time="2024-12-13T01:28:22.415319535Z" level=info msg="Forcibly stopping sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\"" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.494 [WARNING][5565] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0", GenerateName:"calico-kube-controllers-7d844b6d79-", Namespace:"calico-system", SelfLink:"", UID:"131ce79b-ba75-488a-bc92-8c7dd56c5346", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d844b6d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48593a9c4bf62076f1de756f80ac6e5375127c835fc37f3dbb6a596e12751348", Pod:"calico-kube-controllers-7d844b6d79-p5zcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73055a67f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.495 [INFO][5565] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.495 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" iface="eth0" netns="" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.495 [INFO][5565] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.495 [INFO][5565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.532 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.532 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.532 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.542 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.542 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" HandleID="k8s-pod-network.3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Workload="localhost-k8s-calico--kube--controllers--7d844b6d79--p5zcv-eth0" Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.546 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:22.552949 containerd[1582]: 2024-12-13 01:28:22.549 [INFO][5565] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496" Dec 13 01:28:22.553601 containerd[1582]: time="2024-12-13T01:28:22.553023315Z" level=info msg="TearDown network for sandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" successfully" Dec 13 01:28:22.569787 containerd[1582]: time="2024-12-13T01:28:22.569690287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:22.570000 containerd[1582]: time="2024-12-13T01:28:22.569839565Z" level=info msg="RemovePodSandbox \"3efff15009f4eb46436f278e38bdebe94ae4ade2bd4af1feb071ff2868947496\" returns successfully" Dec 13 01:28:22.570814 containerd[1582]: time="2024-12-13T01:28:22.570694970Z" level=info msg="StopPodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\"" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.630 [WARNING][5593] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xkv2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57384486-20a7-4c9b-a347-ccc9ae6fe4a9", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82", Pod:"csi-node-driver-xkv2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6086cc40e8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.630 [INFO][5593] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.630 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" iface="eth0" netns="" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.630 [INFO][5593] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.630 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.665 [INFO][5600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.666 [INFO][5600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.666 [INFO][5600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.678 [WARNING][5600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.678 [INFO][5600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.684 [INFO][5600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:22.694931 containerd[1582]: 2024-12-13 01:28:22.690 [INFO][5593] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.697130 containerd[1582]: time="2024-12-13T01:28:22.695177747Z" level=info msg="TearDown network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" successfully" Dec 13 01:28:22.697130 containerd[1582]: time="2024-12-13T01:28:22.696218809Z" level=info msg="StopPodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" returns successfully" Dec 13 01:28:22.697300 containerd[1582]: time="2024-12-13T01:28:22.697124882Z" level=info msg="RemovePodSandbox for \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\"" Dec 13 01:28:22.697300 containerd[1582]: time="2024-12-13T01:28:22.697159499Z" level=info msg="Forcibly stopping sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\"" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.762 [WARNING][5621] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xkv2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57384486-20a7-4c9b-a347-ccc9ae6fe4a9", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432f344d81b567f7a22c9386a783feeed3c1fd3b54eaf9921285afa91dacba82", Pod:"csi-node-driver-xkv2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6086cc40e8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.762 [INFO][5621] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.762 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" iface="eth0" netns="" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.762 [INFO][5621] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.762 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.802 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.802 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.802 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.809 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.809 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" HandleID="k8s-pod-network.706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Workload="localhost-k8s-csi--node--driver--xkv2k-eth0" Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.813 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:22.818858 containerd[1582]: 2024-12-13 01:28:22.815 [INFO][5621] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438" Dec 13 01:28:22.818858 containerd[1582]: time="2024-12-13T01:28:22.818812132Z" level=info msg="TearDown network for sandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" successfully" Dec 13 01:28:22.950709 containerd[1582]: time="2024-12-13T01:28:22.950590053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:22.950709 containerd[1582]: time="2024-12-13T01:28:22.950676569Z" level=info msg="RemovePodSandbox \"706d4995414c403bc565b370027a2660aa000ec4d9b3d07fac7b890063aa3438\" returns successfully" Dec 13 01:28:22.951425 containerd[1582]: time="2024-12-13T01:28:22.951394901Z" level=info msg="StopPodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\"" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.004 [WARNING][5650] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb", Pod:"calico-apiserver-6fb7cd8fd-s4c26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35667c5d2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.004 [INFO][5650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.004 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" iface="eth0" netns="" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.004 [INFO][5650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.004 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.041 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.041 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.041 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.050 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.050 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.052 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:23.058541 containerd[1582]: 2024-12-13 01:28:23.055 [INFO][5650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.058541 containerd[1582]: time="2024-12-13T01:28:23.058288307Z" level=info msg="TearDown network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" successfully" Dec 13 01:28:23.058541 containerd[1582]: time="2024-12-13T01:28:23.058327933Z" level=info msg="StopPodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" returns successfully" Dec 13 01:28:23.060156 containerd[1582]: time="2024-12-13T01:28:23.059090900Z" level=info msg="RemovePodSandbox for \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\"" Dec 13 01:28:23.060156 containerd[1582]: time="2024-12-13T01:28:23.059141898Z" level=info msg="Forcibly stopping sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\"" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.138 [WARNING][5679] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b29d1a-8f94-417a-ad9f-c1ad8f55cdff", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5772c1cafede3cc8fc01da0289bc0b608fb78441b9e8292974bd0ef486f1b8bb", Pod:"calico-apiserver-6fb7cd8fd-s4c26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35667c5d2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.139 [INFO][5679] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.139 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" iface="eth0" netns="" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.139 [INFO][5679] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.139 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.172 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.172 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.172 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.180 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.180 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" HandleID="k8s-pod-network.75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--s4c26-eth0" Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.182 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:23.188984 containerd[1582]: 2024-12-13 01:28:23.185 [INFO][5679] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57" Dec 13 01:28:23.188984 containerd[1582]: time="2024-12-13T01:28:23.188945584Z" level=info msg="TearDown network for sandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" successfully" Dec 13 01:28:23.293428 containerd[1582]: time="2024-12-13T01:28:23.292610781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:23.293428 containerd[1582]: time="2024-12-13T01:28:23.292719109Z" level=info msg="RemovePodSandbox \"75d5bd5d1b1d30fca2ec4a00bc1edeab9ca2d9abdf268295b2d804f4e1e30d57\" returns successfully" Dec 13 01:28:23.294420 containerd[1582]: time="2024-12-13T01:28:23.293549585Z" level=info msg="StopPodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\"" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.669 [WARNING][5708] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"102b567b-63bd-4f1d-8e44-77806d76c7e6", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383", Pod:"calico-apiserver-6fb7cd8fd-6l2rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecdc50b6717", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.670 [INFO][5708] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.670 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" iface="eth0" netns="" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.670 [INFO][5708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.670 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.697 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.697 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.697 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.747 [WARNING][5716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.747 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.750 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:23.755526 containerd[1582]: 2024-12-13 01:28:23.752 [INFO][5708] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.756107 containerd[1582]: time="2024-12-13T01:28:23.755563264Z" level=info msg="TearDown network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" successfully" Dec 13 01:28:23.756107 containerd[1582]: time="2024-12-13T01:28:23.755594274Z" level=info msg="StopPodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" returns successfully" Dec 13 01:28:23.756351 containerd[1582]: time="2024-12-13T01:28:23.756308677Z" level=info msg="RemovePodSandbox for \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\"" Dec 13 01:28:23.756392 containerd[1582]: time="2024-12-13T01:28:23.756360517Z" level=info msg="Forcibly stopping sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\"" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.816 [WARNING][5738] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0", GenerateName:"calico-apiserver-6fb7cd8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"102b567b-63bd-4f1d-8e44-77806d76c7e6", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7cd8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28774451277909e24e80db9bd32ef7ce6282fb6e2b51569a7189b0ae26a2e383", Pod:"calico-apiserver-6fb7cd8fd-6l2rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecdc50b6717", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.816 [INFO][5738] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.816 [INFO][5738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" iface="eth0" netns="" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.816 [INFO][5738] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.816 [INFO][5738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.844 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.844 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.844 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.856 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.856 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" HandleID="k8s-pod-network.b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Workload="localhost-k8s-calico--apiserver--6fb7cd8fd--6l2rl-eth0" Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.859 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:23.865186 containerd[1582]: 2024-12-13 01:28:23.862 [INFO][5738] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145" Dec 13 01:28:23.865775 containerd[1582]: time="2024-12-13T01:28:23.865227574Z" level=info msg="TearDown network for sandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" successfully" Dec 13 01:28:24.009358 containerd[1582]: time="2024-12-13T01:28:24.008907505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:24.009358 containerd[1582]: time="2024-12-13T01:28:24.009017737Z" level=info msg="RemovePodSandbox \"b51ef73c05b3c08cbb748b8f48ab6e34afad099687a8cc751b1dfd66ab750145\" returns successfully" Dec 13 01:28:24.011269 containerd[1582]: time="2024-12-13T01:28:24.010736289Z" level=info msg="StopPodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\"" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.055 [WARNING][5776] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--k9r28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f340b06-05bb-4342-b343-8cf6258bf943", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646", Pod:"coredns-76f75df574-k9r28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b30d2846a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.056 [INFO][5776] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.056 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" iface="eth0" netns="" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.056 [INFO][5776] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.056 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.085 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.085 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.085 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.092 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.092 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.095 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:24.100904 containerd[1582]: 2024-12-13 01:28:24.098 [INFO][5776] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.100904 containerd[1582]: time="2024-12-13T01:28:24.100936107Z" level=info msg="TearDown network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" successfully" Dec 13 01:28:24.101508 containerd[1582]: time="2024-12-13T01:28:24.100975462Z" level=info msg="StopPodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" returns successfully" Dec 13 01:28:24.103982 containerd[1582]: time="2024-12-13T01:28:24.103893979Z" level=info msg="RemovePodSandbox for \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\"" Dec 13 01:28:24.103982 containerd[1582]: time="2024-12-13T01:28:24.103951339Z" level=info msg="Forcibly stopping sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\"" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.163 [WARNING][5807] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--k9r28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f340b06-05bb-4342-b343-8cf6258bf943", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12445b2501cc134be0f7ff08366fb2513a1e34f0029a7a341e7c2004bd2b0646", Pod:"coredns-76f75df574-k9r28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b30d2846a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.163 [INFO][5807] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.163 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" iface="eth0" netns="" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.164 [INFO][5807] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.164 [INFO][5807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.194 [INFO][5814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.195 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.195 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.201 [WARNING][5814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.201 [INFO][5814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" HandleID="k8s-pod-network.9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Workload="localhost-k8s-coredns--76f75df574--k9r28-eth0" Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.204 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:24.209678 containerd[1582]: 2024-12-13 01:28:24.207 [INFO][5807] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9" Dec 13 01:28:24.210163 containerd[1582]: time="2024-12-13T01:28:24.209739956Z" level=info msg="TearDown network for sandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" successfully" Dec 13 01:28:24.328054 containerd[1582]: time="2024-12-13T01:28:24.327711679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:24.328054 containerd[1582]: time="2024-12-13T01:28:24.327885272Z" level=info msg="RemovePodSandbox \"9baf383db331b1e89651b32d5f9496249cfa4dcd2baf11fb6623d62c6da3a1b9\" returns successfully" Dec 13 01:28:24.328938 containerd[1582]: time="2024-12-13T01:28:24.328696060Z" level=info msg="StopPodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\"" Dec 13 01:28:24.371332 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:36574.service - OpenSSH per-connection server daemon (10.0.0.1:36574). Dec 13 01:28:24.436830 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 36574 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:24.435272 sshd[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:24.444573 systemd-logind[1557]: New session 16 of user core. Dec 13 01:28:24.456544 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.403 [WARNING][5836] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bw7qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"17c50d1a-a584-449b-a49a-4f7a961468bb", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc", Pod:"coredns-76f75df574-bw7qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali249921d41f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.404 [INFO][5836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.404 [INFO][5836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" iface="eth0" netns="" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.404 [INFO][5836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.404 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.450 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.450 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.450 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.458 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.458 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.465 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:24.472159 containerd[1582]: 2024-12-13 01:28:24.468 [INFO][5836] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.472992 containerd[1582]: time="2024-12-13T01:28:24.472254789Z" level=info msg="TearDown network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" successfully" Dec 13 01:28:24.472992 containerd[1582]: time="2024-12-13T01:28:24.472289916Z" level=info msg="StopPodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" returns successfully" Dec 13 01:28:24.473143 containerd[1582]: time="2024-12-13T01:28:24.473096906Z" level=info msg="RemovePodSandbox for \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\"" Dec 13 01:28:24.473143 containerd[1582]: time="2024-12-13T01:28:24.473129690Z" level=info msg="Forcibly stopping sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\"" Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.523 [WARNING][5870] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bw7qj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"17c50d1a-a584-449b-a49a-4f7a961468bb", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef04c2261a907f337fde6185f55fd860327a942ca31cecfb8e3950b79cddafdc", Pod:"coredns-76f75df574-bw7qj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali249921d41f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.523 [INFO][5870] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.523 [INFO][5870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" iface="eth0" netns="" Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.523 [INFO][5870] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.523 [INFO][5870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.555 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0" Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.555 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.555 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.563 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0"
Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.563 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" HandleID="k8s-pod-network.e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861" Workload="localhost-k8s-coredns--76f75df574--bw7qj-eth0"
Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.565 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:28:24.573595 containerd[1582]: 2024-12-13 01:28:24.569 [INFO][5870] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861"
Dec 13 01:28:24.573595 containerd[1582]: time="2024-12-13T01:28:24.573228946Z" level=info msg="TearDown network for sandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" successfully"
Dec 13 01:28:24.578913 containerd[1582]: time="2024-12-13T01:28:24.578715477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:28:24.578913 containerd[1582]: time="2024-12-13T01:28:24.578875003Z" level=info msg="RemovePodSandbox \"e1641b2c10407cbc594a111c92fb2e98c88b0a93377bf69731f174e5a0e9a861\" returns successfully"
Dec 13 01:28:24.637599 sshd[5843]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:24.643271 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:36574.service: Deactivated successfully.
Dec 13 01:28:24.646904 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:28:24.647050 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:28:24.648629 systemd-logind[1557]: Removed session 16.
Dec 13 01:28:29.651137 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:38652.service - OpenSSH per-connection server daemon (10.0.0.1:38652).
Dec 13 01:28:29.690382 sshd[5920]: Accepted publickey for core from 10.0.0.1 port 38652 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:29.692494 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:29.697673 systemd-logind[1557]: New session 17 of user core.
Dec 13 01:28:29.706226 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:28:29.863882 sshd[5920]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:29.868867 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:38652.service: Deactivated successfully.
Dec 13 01:28:29.871898 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:28:29.872537 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:28:29.873617 systemd-logind[1557]: Removed session 17.
Dec 13 01:28:32.259856 kubelet[2789]: E1213 01:28:32.259762 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:28:34.879179 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:38654.service - OpenSSH per-connection server daemon (10.0.0.1:38654).
Dec 13 01:28:34.914961 sshd[5943]: Accepted publickey for core from 10.0.0.1 port 38654 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:34.916912 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:34.921781 systemd-logind[1557]: New session 18 of user core.
Dec 13 01:28:34.935158 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:28:35.055066 sshd[5943]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:35.065103 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:38662.service - OpenSSH per-connection server daemon (10.0.0.1:38662).
Dec 13 01:28:35.066003 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:38654.service: Deactivated successfully.
Dec 13 01:28:35.070812 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:28:35.071866 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:28:35.074096 systemd-logind[1557]: Removed session 18.
Dec 13 01:28:35.102706 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 38662 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:35.104473 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:35.109646 systemd-logind[1557]: New session 19 of user core.
Dec 13 01:28:35.121420 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:28:35.458230 sshd[5955]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:35.470264 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:38668.service - OpenSSH per-connection server daemon (10.0.0.1:38668).
Dec 13 01:28:35.471181 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:38662.service: Deactivated successfully.
Dec 13 01:28:35.475310 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:28:35.476262 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:28:35.478701 systemd-logind[1557]: Removed session 19.
Dec 13 01:28:35.510080 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 38668 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:35.512333 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:35.518364 systemd-logind[1557]: New session 20 of user core.
Dec 13 01:28:35.537295 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:28:37.336224 sshd[5969]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:37.346360 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:34532.service - OpenSSH per-connection server daemon (10.0.0.1:34532).
Dec 13 01:28:37.347440 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:38668.service: Deactivated successfully.
Dec 13 01:28:37.351090 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:28:37.352246 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:28:37.353373 systemd-logind[1557]: Removed session 20.
Dec 13 01:28:37.386394 sshd[5995]: Accepted publickey for core from 10.0.0.1 port 34532 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:37.390474 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:37.395425 systemd-logind[1557]: New session 21 of user core.
Dec 13 01:28:37.402234 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:28:37.645787 sshd[5995]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:37.659123 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:34542.service - OpenSSH per-connection server daemon (10.0.0.1:34542).
Dec 13 01:28:37.659698 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:34532.service: Deactivated successfully.
Dec 13 01:28:37.661904 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:28:37.663746 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:28:37.664683 systemd-logind[1557]: Removed session 21.
Dec 13 01:28:37.697271 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 34542 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:37.699267 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:37.705317 systemd-logind[1557]: New session 22 of user core.
Dec 13 01:28:37.717441 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:28:37.852536 sshd[6008]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:37.857192 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:34542.service: Deactivated successfully.
Dec 13 01:28:37.860731 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:28:37.861856 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:28:37.863351 systemd-logind[1557]: Removed session 22.
Dec 13 01:28:42.259765 kubelet[2789]: E1213 01:28:42.259259 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:28:42.869931 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:34550.service - OpenSSH per-connection server daemon (10.0.0.1:34550).
Dec 13 01:28:42.908751 sshd[6046]: Accepted publickey for core from 10.0.0.1 port 34550 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:42.910989 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:42.919081 systemd-logind[1557]: New session 23 of user core.
Dec 13 01:28:42.927381 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:28:43.079308 sshd[6046]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:43.085401 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:34550.service: Deactivated successfully.
Dec 13 01:28:43.089676 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:28:43.089743 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:28:43.091506 systemd-logind[1557]: Removed session 23.
Dec 13 01:28:48.089065 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:36220.service - OpenSSH per-connection server daemon (10.0.0.1:36220).
Dec 13 01:28:48.125844 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 36220 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:48.127740 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:48.133247 systemd-logind[1557]: New session 24 of user core.
Dec 13 01:28:48.140198 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:28:48.266847 sshd[6064]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:48.271308 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:36220.service: Deactivated successfully.
Dec 13 01:28:48.273712 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:28:48.273739 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:28:48.275096 systemd-logind[1557]: Removed session 24.
Dec 13 01:28:50.431578 kubelet[2789]: E1213 01:28:50.431535 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:28:53.284293 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:36224.service - OpenSSH per-connection server daemon (10.0.0.1:36224).
Dec 13 01:28:53.408013 sshd[6101]: Accepted publickey for core from 10.0.0.1 port 36224 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:53.408726 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:53.416042 systemd-logind[1557]: New session 25 of user core.
Dec 13 01:28:53.420251 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:28:53.555033 sshd[6101]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:53.562724 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:36224.service: Deactivated successfully.
Dec 13 01:28:53.566486 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:28:53.569348 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:28:53.571371 systemd-logind[1557]: Removed session 25.
Dec 13 01:28:58.572296 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:49632.service - OpenSSH per-connection server daemon (10.0.0.1:49632).
Dec 13 01:28:58.711143 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 49632 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:28:58.716410 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:58.730351 systemd-logind[1557]: New session 26 of user core.
Dec 13 01:28:58.734713 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:28:59.011542 sshd[6144]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:59.017536 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:49632.service: Deactivated successfully.
Dec 13 01:28:59.021753 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:28:59.025234 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:28:59.029623 systemd-logind[1557]: Removed session 26.