Dec 13 01:32:26.918823 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:32:26.918860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:32:26.918876 kernel: BIOS-provided physical RAM map:
Dec 13 01:32:26.918885 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:32:26.918893 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:32:26.918902 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:32:26.918913 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:32:26.918922 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:32:26.918931 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:32:26.918940 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:32:26.918957 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:32:26.918966 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Dec 13 01:32:26.918974 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Dec 13 01:32:26.918984 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Dec 13 01:32:26.918999 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:32:26.919008 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:32:26.919022 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:32:26.919032 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:32:26.919042 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:32:26.919051 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:32:26.919060 kernel: NX (Execute Disable) protection: active
Dec 13 01:32:26.919069 kernel: APIC: Static calls initialized
Dec 13 01:32:26.919078 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:32:26.919088 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Dec 13 01:32:26.919098 kernel: SMBIOS 2.8 present.
Dec 13 01:32:26.919107 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:32:26.919117 kernel: Hypervisor detected: KVM
Dec 13 01:32:26.919133 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:32:26.919143 kernel: kvm-clock: using sched offset of 4976062459 cycles
Dec 13 01:32:26.919155 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:32:26.919165 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:32:26.919192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:32:26.919203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:32:26.919213 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:32:26.919222 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:32:26.919232 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:32:26.919247 kernel: Using GB pages for direct mapping
Dec 13 01:32:26.919257 kernel: Secure boot disabled
Dec 13 01:32:26.919267 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:32:26.919277 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:32:26.919297 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:32:26.919308 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:32:26.919318 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:32:26.919333 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:32:26.919344 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:32:26.919354 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:32:26.919365 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:32:26.919375 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:32:26.919385 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:32:26.919396 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:32:26.919410 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:32:26.919420 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:32:26.919430 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:32:26.919440 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:32:26.919451 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:32:26.919461 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:32:26.919471 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:32:26.919485 kernel: No NUMA configuration found
Dec 13 01:32:26.919496 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:32:26.919520 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:32:26.919532 kernel: Zone ranges:
Dec 13 01:32:26.919542 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:32:26.919552 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:32:26.919562 kernel: Normal empty
Dec 13 01:32:26.919572 kernel: Movable zone start for each node
Dec 13 01:32:26.919583 kernel: Early memory node ranges
Dec 13 01:32:26.919593 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:32:26.919604 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:32:26.919614 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:32:26.919629 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:32:26.919639 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:32:26.919649 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:32:26.919664 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:32:26.919674 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:32:26.919685 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:32:26.919695 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:32:26.919705 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:32:26.919715 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:32:26.919731 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:32:26.919741 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:32:26.919751 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:32:26.919761 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:32:26.919772 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:32:26.919782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:32:26.919793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:32:26.919803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:32:26.919813 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:32:26.919827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:32:26.919838 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:32:26.919849 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:32:26.919859 kernel: TSC deadline timer available
Dec 13 01:32:26.919869 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:32:26.919880 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:32:26.919890 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:32:26.919901 kernel: kvm-guest: setup PV sched yield
Dec 13 01:32:26.919911 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:32:26.919926 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:32:26.919936 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:32:26.919947 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:32:26.919958 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:32:26.919968 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:32:26.919978 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:32:26.919988 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:32:26.919999 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:32:26.920015 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:32:26.920030 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:32:26.920041 kernel: random: crng init done
Dec 13 01:32:26.920051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:32:26.920062 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:32:26.920072 kernel: Fallback order for Node 0: 0
Dec 13 01:32:26.920082 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:32:26.920093 kernel: Policy zone: DMA32
Dec 13 01:32:26.920103 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:32:26.920118 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Dec 13 01:32:26.920129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:32:26.920139 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:32:26.920149 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:32:26.920160 kernel: Dynamic Preempt: voluntary
Dec 13 01:32:26.920208 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:32:26.920228 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:32:26.920240 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:32:26.920251 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:32:26.920262 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:32:26.920273 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:32:26.920283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:32:26.920298 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:32:26.920308 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:32:26.920324 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:32:26.920335 kernel: Console: colour dummy device 80x25
Dec 13 01:32:26.920345 kernel: printk: console [ttyS0] enabled
Dec 13 01:32:26.920359 kernel: ACPI: Core revision 20230628
Dec 13 01:32:26.920370 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:32:26.920380 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:32:26.920391 kernel: x2apic enabled
Dec 13 01:32:26.920402 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:32:26.920412 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:32:26.920423 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:32:26.920434 kernel: kvm-guest: setup PV IPIs
Dec 13 01:32:26.920445 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:32:26.920459 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:32:26.920470 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:32:26.920481 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:32:26.920492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:32:26.920503 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:32:26.920523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:32:26.920533 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:32:26.920544 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:32:26.920555 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:32:26.920570 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:32:26.920581 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:32:26.920595 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:32:26.920606 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:32:26.920617 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:32:26.920628 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:32:26.920639 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:32:26.920650 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:32:26.920665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:32:26.920676 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:32:26.920686 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:32:26.920697 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:32:26.920708 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:32:26.920719 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:32:26.920730 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:32:26.920740 kernel: landlock: Up and running.
Dec 13 01:32:26.920751 kernel: SELinux: Initializing.
Dec 13 01:32:26.920766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:32:26.920777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:32:26.920789 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:32:26.920800 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:32:26.920811 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:32:26.920821 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:32:26.920832 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:32:26.920843 kernel: ... version: 0
Dec 13 01:32:26.920854 kernel: ... bit width: 48
Dec 13 01:32:26.920868 kernel: ... generic registers: 6
Dec 13 01:32:26.920879 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:32:26.920889 kernel: ... max period: 00007fffffffffff
Dec 13 01:32:26.920900 kernel: ... fixed-purpose events: 0
Dec 13 01:32:26.920910 kernel: ... event mask: 000000000000003f
Dec 13 01:32:26.920920 kernel: signal: max sigframe size: 1776
Dec 13 01:32:26.920930 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:32:26.920941 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:32:26.920951 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:32:26.920965 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:32:26.920976 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:32:26.920986 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:32:26.920996 kernel: smpboot: Max logical packages: 1
Dec 13 01:32:26.921007 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:32:26.921017 kernel: devtmpfs: initialized
Dec 13 01:32:26.921028 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:32:26.921039 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:32:26.921050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:32:26.921064 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:32:26.921075 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:32:26.921086 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:32:26.921096 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:32:26.921107 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:32:26.921117 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:32:26.921128 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:32:26.921139 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:32:26.921149 kernel: audit: type=2000 audit(1734053545.129:1): state=initialized audit_enabled=0 res=1
Dec 13 01:32:26.921164 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:32:26.921190 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:32:26.921201 kernel: cpuidle: using governor menu
Dec 13 01:32:26.921212 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:32:26.921223 kernel: dca service started, version 1.12.1
Dec 13 01:32:26.921234 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:32:26.921245 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:32:26.921256 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:32:26.921267 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:32:26.921282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:32:26.921292 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:32:26.921303 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:32:26.921313 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:32:26.921323 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:32:26.921333 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:32:26.921344 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:32:26.921355 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:32:26.921365 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:32:26.921379 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:32:26.921390 kernel: ACPI: Interpreter enabled
Dec 13 01:32:26.921400 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:32:26.921411 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:32:26.921422 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:32:26.921433 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:32:26.921444 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:32:26.921456 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:32:26.921734 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:32:26.921918 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:32:26.922084 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:32:26.922100 kernel: PCI host bridge to bus 0000:00
Dec 13 01:32:26.922297 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:32:26.922452 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:32:26.922611 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:32:26.922767 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:32:26.922918 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:32:26.923069 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:32:26.923243 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:32:26.923443 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:32:26.923616 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:32:26.923759 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:32:26.923908 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:32:26.924048 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:32:26.924211 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:32:26.924357 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:32:26.924555 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:32:26.924711 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:32:26.926142 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:32:26.926325 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:32:26.926476 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:32:26.926614 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:32:26.926741 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:32:26.926867 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:32:26.927017 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:32:26.927153 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:32:26.927306 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:32:26.927433 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:32:26.927569 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:32:26.927713 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:32:26.927840 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:32:26.927983 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:32:26.928116 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:32:26.928306 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:32:26.928449 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:32:26.928608 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:32:26.928620 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:32:26.928628 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:32:26.928636 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:32:26.928649 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:32:26.928656 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:32:26.928664 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:32:26.928672 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:32:26.928679 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:32:26.928687 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:32:26.928695 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:32:26.928702 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:32:26.928709 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:32:26.928720 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:32:26.928728 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:32:26.928735 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:32:26.928743 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:32:26.928751 kernel: iommu: Default domain type: Translated
Dec 13 01:32:26.928758 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:32:26.928766 kernel: efivars: Registered efivars operations
Dec 13 01:32:26.928774 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:32:26.928781 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:32:26.928792 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:32:26.928800 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:32:26.928807 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:32:26.928815 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:32:26.928943 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:32:26.929069 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:32:26.929215 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:32:26.929227 kernel: vgaarb: loaded
Dec 13 01:32:26.929235 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:32:26.929248 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:32:26.929256 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:32:26.929264 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:32:26.929272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:32:26.929279 kernel: pnp: PnP ACPI init
Dec 13 01:32:26.929432 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:32:26.929445 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:32:26.929453 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:32:26.929464 kernel: NET: Registered PF_INET protocol family
Dec 13 01:32:26.929472 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:32:26.929480 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:32:26.929488 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:32:26.929496 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:32:26.929504 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:32:26.929519 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:32:26.929527 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:32:26.929535 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:32:26.929546 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:32:26.929554 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:32:26.929682 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:32:26.929808 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:32:26.929927 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:32:26.930041 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:32:26.930160 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:32:26.931465 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:32:26.931600 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:32:26.931713 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:32:26.931724 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:32:26.931731 kernel: Initialise system trusted keyrings
Dec 13 01:32:26.931739 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:32:26.931747 kernel: Key type asymmetric registered
Dec 13 01:32:26.931755 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:32:26.931762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:32:26.931770 kernel: io scheduler mq-deadline registered
Dec 13 01:32:26.931782 kernel: io scheduler kyber registered
Dec 13 01:32:26.931790 kernel: io scheduler bfq registered
Dec 13 01:32:26.931798 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:32:26.931806 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:32:26.931814 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:32:26.931822 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:32:26.931829 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:32:26.931838 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:32:26.931845 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:32:26.931856 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:32:26.931864 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:32:26.932026 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:32:26.932039 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:32:26.932155 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:32:26.932298 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:32:26 UTC (1734053546)
Dec 13 01:32:26.932417 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:32:26.932431 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:32:26.932439 kernel: efifb: probing for efifb
Dec 13 01:32:26.932447 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Dec 13 01:32:26.932455 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Dec 13 01:32:26.932463 kernel: efifb: scrolling: redraw
Dec 13 01:32:26.932471 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Dec 13 01:32:26.932479 kernel: Console: switching to colour frame buffer device 100x37
Dec 13 01:32:26.932503 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:32:26.932522 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:32:26.932532 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:32:26.932540 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:32:26.932548 kernel: Segment Routing with IPv6
Dec 13 01:32:26.932556 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:32:26.932564 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:32:26.932572 kernel: Key type dns_resolver registered
Dec 13 01:32:26.932580 kernel: IPI shorthand broadcast: enabled
Dec 13 01:32:26.932588 kernel: sched_clock: Marking stable (965002538, 118553342)->(1139586602, -56030722)
Dec 13 01:32:26.932596 kernel: registered taskstats version 1
Dec 13 01:32:26.932604 kernel: Loading compiled-in X.509 certificates
Dec 13 01:32:26.932614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:32:26.932622 kernel: Key type .fscrypt registered
Dec 13 01:32:26.932630 kernel: Key type fscrypt-provisioning registered
Dec 13 01:32:26.932638 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:32:26.932645 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:32:26.932653 kernel: ima: No architecture policies found
Dec 13 01:32:26.932661 kernel: clk: Disabling unused clocks
Dec 13 01:32:26.932669 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:32:26.932679 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:32:26.932688 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:32:26.932695 kernel: Run /init as init process
Dec 13 01:32:26.932703 kernel: with arguments:
Dec 13 01:32:26.932711 kernel: /init
Dec 13 01:32:26.932719 kernel: with environment:
Dec 13 01:32:26.932727 kernel: HOME=/
Dec 13 01:32:26.932734 kernel: TERM=linux
Dec 13 01:32:26.932742 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:32:26.932755 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:32:26.932765 systemd[1]: Detected virtualization kvm.
Dec 13 01:32:26.932774 systemd[1]: Detected architecture x86-64.
Dec 13 01:32:26.932782 systemd[1]: Running in initrd.
Dec 13 01:32:26.932796 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:32:26.932804 systemd[1]: Hostname set to .
Dec 13 01:32:26.932813 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:32:26.932821 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:32:26.932830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:32:26.932838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:32:26.932847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:32:26.932856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:32:26.932867 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:32:26.932876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:32:26.932886 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:32:26.932895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:32:26.932904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:32:26.932912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:32:26.932921 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:32:26.932932 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:32:26.932940 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:32:26.932949 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:32:26.932957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:32:26.932966 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:32:26.932974 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:32:26.932983 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:32:26.932991 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:32:26.932999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:32:26.933010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:32:26.933019 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:32:26.933027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:32:26.933036 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:32:26.933044 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:32:26.933053 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:32:26.933061 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:32:26.933070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:32:26.933083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:26.933092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:32:26.933103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:32:26.933111 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:32:26.933121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:32:26.933132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:26.933161 systemd-journald[192]: Collecting audit messages is disabled.
Dec 13 01:32:26.933200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:32:26.933213 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:32:26.933222 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:32:26.933231 systemd-journald[192]: Journal started
Dec 13 01:32:26.933249 systemd-journald[192]: Runtime Journal (/run/log/journal/a64c0992906f45f486f334fd6de25074) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:32:26.909422 systemd-modules-load[193]: Inserted module 'overlay'
Dec 13 01:32:26.936503 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:32:26.937471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:32:26.946245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:32:26.948398 systemd-modules-load[193]: Inserted module 'br_netfilter'
Dec 13 01:32:26.949213 kernel: Bridge firewalling registered
Dec 13 01:32:26.949745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:32:26.951235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:32:26.959360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:26.960128 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:32:26.965151 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:32:26.965994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:32:26.977631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:26.985134 dracut-cmdline[223]: dracut-dracut-053
Dec 13 01:32:26.986311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:32:26.991226 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:32:27.026634 systemd-resolved[231]: Positive Trust Anchors:
Dec 13 01:32:27.026658 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:32:27.026689 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:32:27.029969 systemd-resolved[231]: Defaulting to hostname 'linux'.
Dec 13 01:32:27.031527 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:32:27.036565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:32:27.097200 kernel: SCSI subsystem initialized
Dec 13 01:32:27.110193 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:32:27.125195 kernel: iscsi: registered transport (tcp)
Dec 13 01:32:27.146189 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:32:27.146219 kernel: QLogic iSCSI HBA Driver
Dec 13 01:32:27.195688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:32:27.201318 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:32:27.227975 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:32:27.228019 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:32:27.228033 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:32:27.268194 kernel: raid6: avx2x4 gen() 30310 MB/s
Dec 13 01:32:27.285186 kernel: raid6: avx2x2 gen() 30739 MB/s
Dec 13 01:32:27.302287 kernel: raid6: avx2x1 gen() 25460 MB/s
Dec 13 01:32:27.302300 kernel: raid6: using algorithm avx2x2 gen() 30739 MB/s
Dec 13 01:32:27.320284 kernel: raid6: .... xor() 19800 MB/s, rmw enabled
Dec 13 01:32:27.320306 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:32:27.340197 kernel: xor: automatically using best checksumming function avx
Dec 13 01:32:27.497203 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:32:27.509250 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:32:27.526385 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:32:27.538983 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Dec 13 01:32:27.543997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:32:27.558326 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:32:27.573060 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Dec 13 01:32:27.604437 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:32:27.611301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:32:27.676815 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:32:27.687383 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:32:27.702538 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:32:27.704955 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:32:27.708771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:32:27.710206 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:32:27.722083 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:32:27.742936 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:32:27.743211 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:32:27.743235 kernel: GPT:9289727 != 19775487
Dec 13 01:32:27.743271 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:32:27.743288 kernel: GPT:9289727 != 19775487
Dec 13 01:32:27.743307 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:32:27.743324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:32:27.743340 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:32:27.722327 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:32:27.737915 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:32:27.751196 kernel: libata version 3.00 loaded.
Dec 13 01:32:27.751787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:32:27.751942 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:32:27.754577 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:32:27.760366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:32:27.764145 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:32:27.764183 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:32:27.761649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:27.765460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:27.772991 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463)
Dec 13 01:32:27.775632 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:32:27.801652 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:32:27.801675 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:32:27.801883 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:32:27.802073 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (472)
Dec 13 01:32:27.802091 kernel: scsi host0: ahci
Dec 13 01:32:27.802324 kernel: scsi host1: ahci
Dec 13 01:32:27.802615 kernel: scsi host2: ahci
Dec 13 01:32:27.802834 kernel: scsi host3: ahci
Dec 13 01:32:27.803044 kernel: scsi host4: ahci
Dec 13 01:32:27.803266 kernel: scsi host5: ahci
Dec 13 01:32:27.803455 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 01:32:27.803477 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 01:32:27.803504 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 01:32:27.803519 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 01:32:27.803533 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 01:32:27.803547 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 01:32:27.781497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:27.799628 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:32:27.819753 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:32:27.824749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:32:27.829721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:32:27.830157 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:32:27.845313 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:32:27.847864 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:32:27.849144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:27.851772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:27.854921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:27.870535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:27.874495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:32:27.896200 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:32:27.951075 disk-uuid[553]: Primary Header is updated.
Dec 13 01:32:27.951075 disk-uuid[553]: Secondary Entries is updated.
Dec 13 01:32:27.951075 disk-uuid[553]: Secondary Header is updated.
Dec 13 01:32:27.966720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:32:27.970187 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:32:28.109206 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:32:28.109265 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:32:28.109279 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:32:28.110195 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:32:28.111192 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:32:28.112198 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:32:28.112230 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:32:28.113327 kernel: ata3.00: applying bridge limits
Dec 13 01:32:28.114201 kernel: ata3.00: configured for UDMA/100
Dec 13 01:32:28.114225 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:32:28.164421 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:32:28.186380 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:32:28.186407 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:32:28.977214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:32:28.977591 disk-uuid[568]: The operation has completed successfully.
Dec 13 01:32:29.008729 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:32:29.008853 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:32:29.031556 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:32:29.035242 sh[597]: Success
Dec 13 01:32:29.049214 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:32:29.083084 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:32:29.099146 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:32:29.102114 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:32:29.116185 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:32:29.116243 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:32:29.116259 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:32:29.116275 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:32:29.116914 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:32:29.122392 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:32:29.124490 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:32:29.132307 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:32:29.134269 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:32:29.144432 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:32:29.144488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:32:29.144505 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:32:29.148203 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:32:29.157971 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:32:29.160152 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:32:29.170028 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:32:29.178725 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:32:29.238703 ignition[689]: Ignition 2.19.0
Dec 13 01:32:29.238718 ignition[689]: Stage: fetch-offline
Dec 13 01:32:29.238762 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:29.238777 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:32:29.238911 ignition[689]: parsed url from cmdline: ""
Dec 13 01:32:29.238916 ignition[689]: no config URL provided
Dec 13 01:32:29.238924 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:32:29.238936 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:32:29.238970 ignition[689]: op(1): [started] loading QEMU firmware config module
Dec 13 01:32:29.238977 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:32:29.248783 ignition[689]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:32:29.272221 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:32:29.285614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:32:29.292307 ignition[689]: parsing config with SHA512: 46d8b13003f05beda2617bf834b42b95233710a0502eb181db001d92b3c9613443a53582231f46facbd6c14a47554f9be212eab8bddf6e4f41c84d309a52754c
Dec 13 01:32:29.297125 unknown[689]: fetched base config from "system"
Dec 13 01:32:29.297141 unknown[689]: fetched user config from "qemu"
Dec 13 01:32:29.299359 ignition[689]: fetch-offline: fetch-offline passed
Dec 13 01:32:29.300301 ignition[689]: Ignition finished successfully
Dec 13 01:32:29.303040 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:32:29.312148 systemd-networkd[785]: lo: Link UP
Dec 13 01:32:29.312160 systemd-networkd[785]: lo: Gained carrier
Dec 13 01:32:29.314463 systemd-networkd[785]: Enumeration completed
Dec 13 01:32:29.314607 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:32:29.315034 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:32:29.315040 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:32:29.317129 systemd[1]: Reached target network.target - Network.
Dec 13 01:32:29.317678 systemd-networkd[785]: eth0: Link UP
Dec 13 01:32:29.317684 systemd-networkd[785]: eth0: Gained carrier
Dec 13 01:32:29.317694 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:32:29.319300 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:32:29.333432 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:32:29.343262 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:32:29.354073 ignition[789]: Ignition 2.19.0
Dec 13 01:32:29.354085 ignition[789]: Stage: kargs
Dec 13 01:32:29.354330 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:29.354347 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:32:29.355417 ignition[789]: kargs: kargs passed
Dec 13 01:32:29.355493 ignition[789]: Ignition finished successfully
Dec 13 01:32:29.358963 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:32:29.372360 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:32:29.390861 ignition[798]: Ignition 2.19.0
Dec 13 01:32:29.390878 ignition[798]: Stage: disks
Dec 13 01:32:29.391056 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:29.391069 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:32:29.394137 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:32:29.391833 ignition[798]: disks: disks passed
Dec 13 01:32:29.396015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:32:29.391883 ignition[798]: Ignition finished successfully
Dec 13 01:32:29.397861 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:32:29.399076 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:32:29.400665 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:32:29.401055 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:32:29.412352 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:32:29.427347 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:32:29.434732 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:32:29.447296 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:32:29.539188 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:32:29.539490 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:32:29.540965 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:32:29.552300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:32:29.554444 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:32:29.555722 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:32:29.555767 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:32:29.567998 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Dec 13 01:32:29.568039 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:32:29.568056 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:32:29.568071 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:32:29.555793 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:32:29.562572 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:32:29.568956 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:32:29.573554 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:32:29.575621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:32:29.607041 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:32:29.611270 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:32:29.615102 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:32:29.620399 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:32:29.715017 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:32:29.726292 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:32:29.728415 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:32:29.737226 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:32:29.755956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:32:29.837293 ignition[929]: INFO : Ignition 2.19.0
Dec 13 01:32:29.837293 ignition[929]: INFO : Stage: mount
Dec 13 01:32:29.839156 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:29.839156 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:32:29.841496 ignition[929]: INFO : mount: mount passed
Dec 13 01:32:29.842281 ignition[929]: INFO : Ignition finished successfully
Dec 13 01:32:29.845057 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:32:29.856279 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:32:30.114528 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:32:30.126356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:32:30.133826 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
Dec 13 01:32:30.135904 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:32:30.135921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:32:30.135932 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:32:30.139187 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:32:30.140600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
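The "cut: ... No such file or directory" lines are expected on a first boot: initrd-setup-root probes the not-yet-populated root filesystem for existing account databases before seeding them. A hedged shell sketch of that kind of probe (the logic is an illustration, not the actual script):

  # check whether /sysroot already carries a "core" user before seeding /etc/passwd
  if cut -d: -f1 /sysroot/etc/passwd 2>/dev/null | grep -qx core; then
      echo "core already present; leaving /sysroot/etc/passwd alone"
  else
      echo "no passwd yet; baseline entries will be copied in"
  fi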
Dec 13 01:32:30.168864 ignition[959]: INFO : Ignition 2.19.0
Dec 13 01:32:30.168864 ignition[959]: INFO : Stage: files
Dec 13 01:32:30.170557 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:30.170557 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:32:30.173278 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:32:30.174615 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:32:30.174615 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:32:30.178130 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:32:30.179535 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:32:30.180037 unknown[959]: wrote ssh authorized keys file for user: core
Dec 13 01:32:30.180889 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:32:30.183487 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:32:30.183487 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:32:30.240389 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:32:30.459591 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:32:30.459591 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:32:30.463331 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:32:30.465016 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:32:30.470261 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:32:30.471992 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:32:30.473753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:32:30.475523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:32:30.477283 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:32:30.479234 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:32:30.481224 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:32:30.483037 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:32:30.485602 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
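The op(3) through op(9) operations above are driven by the storage section of the Ignition config. A hedged sketch of what the config behind the helm download likely contains (the paths and URL come from the log; the spec version, exact field layout, and mode value are assumptions; mode 493 is decimal for 0755):

  {
    "ignition": { "version": "3.3.0" },
    "storage": {
      "files": [
        {
          "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
        },
        { "path": "/home/core/install.sh", "mode": 493 }
      ]
    }
  }

Note that the config names plain paths such as /opt/...; Ignition prefixes them with /sysroot because it runs from the initramfs, which is why every write above targets /sysroot.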
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:32:30.488070 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:32:30.490191 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:32:30.582315 systemd-networkd[785]: eth0: Gained IPv6LL Dec 13 01:32:30.981022 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:32:31.576716 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:32:31.576716 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:32:31.580803 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:32:31.583316 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:32:31.583316 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:32:31.583316 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:32:31.588196 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:32:31.590392 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:32:31.590392 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:32:31.593528 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:32:31.621767 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:32:31.630303 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:32:31.631991 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:32:31.631991 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:32:31.634756 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:32:31.636265 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:32:31.638055 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:32:31.639724 ignition[959]: INFO : files: files passed Dec 13 01:32:31.640463 ignition[959]: INFO : Ignition finished successfully Dec 13 01:32:31.644191 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:32:31.651354 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:32:31.653862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 13 01:32:31.655796 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:32:31.655934 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:32:31.670150 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:32:31.674065 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:32:31.674065 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:32:31.677347 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:32:31.681099 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:32:31.681730 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:32:31.694346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:32:31.717955 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:32:31.718083 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:32:31.720325 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:32:31.722352 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:32:31.724354 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:32:31.733315 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:32:31.747037 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:32:31.761338 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:32:31.770512 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:32:31.771795 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:32:31.774028 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:32:31.776026 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:32:31.776140 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:32:31.778383 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:32:31.780106 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:32:31.782151 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:32:31.784200 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:32:31.786299 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:32:31.788468 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:32:31.790619 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:32:31.792915 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:32:31.794924 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:32:31.797124 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:32:31.798910 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:32:31.799027 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:32:31.801197 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:32:31.802822 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:32:31.804902 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:32:31.805010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:32:31.807222 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:32:31.807340 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:32:31.809540 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:32:31.809653 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:32:31.811681 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:32:31.813420 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:32:31.819268 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:32:31.821115 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:32:31.822970 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:32:31.825310 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:32:31.825423 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:32:31.827125 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:32:31.827235 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:32:31.829015 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:32:31.829137 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:32:31.831028 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:32:31.831138 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:32:31.844321 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:32:31.845910 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:32:31.847036 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:32:31.847192 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:32:31.849220 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:32:31.849382 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:32:31.854320 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:32:31.854489 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:32:31.858541 ignition[1012]: INFO : Ignition 2.19.0
Dec 13 01:32:31.858541 ignition[1012]: INFO : Stage: umount
Dec 13 01:32:31.860232 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:31.860232 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:32:31.860232 ignition[1012]: INFO : umount: umount passed
Dec 13 01:32:31.860232 ignition[1012]: INFO : Ignition finished successfully
Dec 13 01:32:31.865960 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:32:31.866094 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:32:31.866767 systemd[1]: Stopped target network.target - Network.
Dec 13 01:32:31.869101 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:32:31.869159 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:32:31.870746 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:32:31.870796 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:32:31.872582 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:32:31.872637 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:32:31.872900 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:32:31.872944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:32:31.876226 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:32:31.878400 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:32:31.881147 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:32:31.881605 systemd-networkd[785]: eth0: DHCPv6 lease lost
Dec 13 01:32:31.887992 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:32:31.888157 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:32:31.890876 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:32:31.890948 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:32:31.895851 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:32:31.896002 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:32:31.897069 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:32:31.897113 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:32:31.915338 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:32:31.915765 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:32:31.915834 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:32:31.916144 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:32:31.916210 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:31.919582 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:32:31.919634 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:32:31.919995 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:32:31.937984 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:32:31.938133 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:32:31.940058 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:32:31.940271 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:32:31.942308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:32:31.942370 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:32:31.944073 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:32:31.944116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:32:31.946335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:32:31.946398 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:32:31.948581 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:32:31.948632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:32:31.950557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:32:31.950612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:32:31.962308 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:32:31.963401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:32:31.963461 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:32:31.965758 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:32:31.965810 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:32:31.967978 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:32:31.968028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:32:31.970468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:32:31.970519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:31.972963 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:32:31.973071 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:32:32.026103 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:32:32.026282 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:32:32.028713 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:32:32.029945 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:32:32.030011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:32:32.042316 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:32:32.049382 systemd[1]: Switching root.
Dec 13 01:32:32.083015 systemd-journald[192]: Journal stopped
Dec 13 01:32:33.143081 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:32:33.143144 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:32:33.143161 kernel: SELinux: policy capability open_perms=1
Dec 13 01:32:33.143188 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:32:33.143204 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:32:33.143215 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:32:33.143231 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:32:33.143242 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:32:33.143263 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:32:33.143274 kernel: audit: type=1403 audit(1734053552.414:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:32:33.143292 systemd[1]: Successfully loaded SELinux policy in 40.807ms.
Dec 13 01:32:33.143310 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.990ms.
Dec 13 01:32:33.143323 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:32:33.143345 systemd[1]: Detected virtualization kvm.
Dec 13 01:32:33.143357 systemd[1]: Detected architecture x86-64.
Dec 13 01:32:33.143370 systemd[1]: Detected first boot.
Dec 13 01:32:33.143382 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:32:33.143395 zram_generator::config[1056]: No configuration found.
Dec 13 01:32:33.143408 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:32:33.143420 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:32:33.143432 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:32:33.143447 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:32:33.143461 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:32:33.143478 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:32:33.143491 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:32:33.143504 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:32:33.143516 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:32:33.143529 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:32:33.143541 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:32:33.143554 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:32:33.143568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:32:33.143582 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:32:33.143594 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:32:33.143606 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:32:33.143619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:32:33.143631 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:32:33.143644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:32:33.143656 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:32:33.143669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:32:33.143684 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:32:33.143696 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:32:33.143708 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:32:33.143721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:32:33.143734 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:32:33.143746 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:32:33.143758 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:32:33.143771 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:32:33.143788 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:32:33.143802 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:32:33.143815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
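On this first boot systemd derives /etc/machine-id from the hypervisor-supplied VM UUID rather than generating a random one ("Initializing machine ID from VM UUID" above). On KVM that UUID is most likely read from the DMI tables; a quick way to compare the two, assuming the sysfs path is readable:

  # the VM UUID the hypervisor advertises via DMI
  cat /sys/class/dmi/id/product_uuid
  # the machine ID systemd derived from it (lower-case hex, no dashes)
  cat /etc/machine-id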
Dec 13 01:32:33.143827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:32:33.143839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:32:33.143852 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:32:33.143864 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:32:33.143876 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:32:33.143888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:33.143903 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:32:33.143915 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:32:33.143927 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:32:33.143940 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:32:33.143952 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:32:33.143964 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:32:33.143976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:33.143988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:32:33.144004 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:32:33.144016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:32:33.144028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:32:33.144040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:32:33.144052 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:32:33.144064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:32:33.144077 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:32:33.144089 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:32:33.144103 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:32:33.144115 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:32:33.144128 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:32:33.144140 kernel: fuse: init (API version 7.39)
Dec 13 01:32:33.144151 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:32:33.144163 kernel: loop: module loaded
Dec 13 01:32:33.144189 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:32:33.144201 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:32:33.144214 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:32:33.144247 systemd-journald[1126]: Collecting audit messages is disabled.
Dec 13 01:32:33.144269 systemd-journald[1126]: Journal started
Dec 13 01:32:33.144293 systemd-journald[1126]: Runtime Journal (/run/log/journal/a64c0992906f45f486f334fd6de25074) is 6.0M, max 48.3M, 42.2M free.
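journald's size report above (runtime journal capped at 48.3M) reflects its default policy of capping usage at a fraction of the backing filesystem rather than any explicit setting. Explicit caps would go in journald.conf; an illustrative sketch whose values merely mirror the limits computed in this log, not shipped defaults:

  # /etc/systemd/journald.conf
  [Journal]
  Storage=persistent
  RuntimeMaxUse=48M
  SystemMaxUse=195M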
Dec 13 01:32:32.928179 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:32:32.949008 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:32:32.949535 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:32:33.148569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:32:33.148611 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:32:33.148633 systemd[1]: Stopped verity-setup.service.
Dec 13 01:32:33.153143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:33.156251 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:32:33.156307 kernel: ACPI: bus type drm_connector registered
Dec 13 01:32:33.158479 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:32:33.159732 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:32:33.160948 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:32:33.162057 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:32:33.163347 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:32:33.164620 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:32:33.165859 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:32:33.167394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:32:33.168977 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:32:33.169160 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:32:33.170767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:32:33.170950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:32:33.172412 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:32:33.172598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:32:33.173962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:32:33.174142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:32:33.175692 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:32:33.175870 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:32:33.177397 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:32:33.177580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:32:33.178950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:32:33.180372 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:32:33.181883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:32:33.197771 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:32:33.205348 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:32:33.207999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:32:33.209163 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:32:33.209220 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:32:33.211248 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:32:33.213656 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:32:33.219395 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:32:33.220702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:33.223020 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:32:33.227901 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:32:33.229303 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:32:33.234318 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:32:33.236514 systemd-journald[1126]: Time spent on flushing to /var/log/journal/a64c0992906f45f486f334fd6de25074 is 28.623ms for 992 entries.
Dec 13 01:32:33.236514 systemd-journald[1126]: System Journal (/var/log/journal/a64c0992906f45f486f334fd6de25074) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:32:33.272718 systemd-journald[1126]: Received client request to flush runtime journal.
Dec 13 01:32:33.272760 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:32:33.235612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:32:33.238349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:33.245356 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:32:33.248496 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:32:33.251660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:32:33.253577 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:32:33.255585 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:32:33.258466 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:32:33.260513 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:32:33.270627 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:32:33.279882 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:32:33.286836 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:32:33.288917 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:32:33.290886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:33.303747 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Dec 13 01:32:33.303765 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Dec 13 01:32:33.304819 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
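systemd-journal-flush.service, started above, migrates the volatile journal from /run/log/journal into /var/log/journal once the root filesystem is writable; "Received client request to flush runtime journal" is journald acknowledging that request. The same flush can be triggered by hand:

  # ask journald to move /run/log/journal into /var/log/journal
  journalctl --flush
  # the persistent location should now hold this machine's journal directory
  ls /var/log/journal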
Dec 13 01:32:33.307297 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:32:33.308915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:32:33.311266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:32:33.313110 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:32:33.324498 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:32:33.338196 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 01:32:33.356670 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:32:33.366483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:32:33.377207 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 01:32:33.386246 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Dec 13 01:32:33.386270 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Dec 13 01:32:33.393738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:32:33.424216 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 01:32:33.436429 kernel: loop4: detected capacity change from 0 to 210664
Dec 13 01:32:33.446614 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 01:32:33.458117 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:32:33.458904 (sd-merge)[1198]: Merged extensions into '/usr'.
Dec 13 01:32:33.463210 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:32:33.463229 systemd[1]: Reloading...
Dec 13 01:32:33.532206 zram_generator::config[1224]: No configuration found.
Dec 13 01:32:33.588984 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:32:33.666157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:32:33.716576 systemd[1]: Reloading finished in 252 ms.
Dec 13 01:32:33.752291 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:32:33.753915 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:32:33.768375 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:32:33.770850 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:32:33.779387 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:32:33.779402 systemd[1]: Reloading...
Dec 13 01:32:33.794783 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:32:33.795146 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:32:33.796617 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:32:33.797032 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Dec 13 01:32:33.797116 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Dec 13 01:32:33.800678 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
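The (sd-merge) lines show systemd-sysext overlaying the three extension images onto /usr (each one surfacing earlier as a loop device capacity change), after which PID 1 reloads so the merged unit files become visible. The merge can be inspected at runtime with the stock tooling:

  # list merged extension images and their hierarchies
  systemd-sysext status
  # images are picked up from /etc/extensions, /run/extensions and /var/lib/extensions;
  # the kubernetes.raw symlink written by Ignition lives in the first of these
  ls -l /etc/extensions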
Dec 13 01:32:33.800753 systemd-tmpfiles[1262]: Skipping /boot
Dec 13 01:32:33.813476 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:32:33.813500 systemd-tmpfiles[1262]: Skipping /boot
Dec 13 01:32:33.839267 zram_generator::config[1289]: No configuration found.
Dec 13 01:32:33.951557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:32:34.001422 systemd[1]: Reloading finished in 221 ms.
Dec 13 01:32:34.020059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:32:34.031630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:32:34.038962 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:32:34.041541 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:32:34.043991 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:32:34.048878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:32:34.051960 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:32:34.055606 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:32:34.061567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:34.061879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:34.063419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:32:34.068478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:32:34.075779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:32:34.077103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:34.082296 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:32:34.083492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:34.084671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:32:34.084884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:32:34.086720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:32:34.086888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:32:34.087962 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
Dec 13 01:32:34.088774 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:32:34.088986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:32:34.094466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:32:34.105786 augenrules[1356]: No rules
Dec 13 01:32:34.105803 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:32:34.107953 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
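The "Duplicate line for path ..., ignoring" warnings above are systemd-tmpfiles noticing that two tmpfiles.d fragments declare the same path; the first declaration wins and later ones are skipped, so the warnings are harmless. For reference, each tmpfiles.d entry is a single-line record; an illustrative line for one of the flagged paths (the mode and ownership shown are assumptions, not the shipped fragment):

  # /usr/lib/tmpfiles.d/example.conf - fields: Type Path Mode User Group Age Argument
  d /var/log/journal 2755 root systemd-journal - -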
Dec 13 01:32:34.112472 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:32:34.115451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:32:34.119249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:34.119419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:34.129386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:32:34.132345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:32:34.134425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:32:34.147386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:32:34.147819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:34.150356 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:32:34.153843 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:32:34.156351 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:32:34.157725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:34.158102 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:32:34.159881 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:32:34.162597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:32:34.162778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:32:34.164589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:32:34.164774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:32:34.174618 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:32:34.174982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:32:34.182966 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:32:34.183203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:32:34.184226 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1385)
Dec 13 01:32:34.189786 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:32:34.190925 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1385)
Dec 13 01:32:34.190789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:32:34.190858 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:32:34.190893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:32:34.205561 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:32:34.239247 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374)
Dec 13 01:32:34.251622 systemd-networkd[1390]: lo: Link UP
Dec 13 01:32:34.252160 systemd-networkd[1390]: lo: Gained carrier
Dec 13 01:32:34.255340 systemd-networkd[1390]: Enumeration completed
Dec 13 01:32:34.257279 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:32:34.258786 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:32:34.263392 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:32:34.264200 systemd-networkd[1390]: eth0: Link UP
Dec 13 01:32:34.264250 systemd-networkd[1390]: eth0: Gained carrier
Dec 13 01:32:34.264312 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:32:34.269206 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:32:34.269921 systemd-resolved[1332]: Positive Trust Anchors:
Dec 13 01:32:34.269941 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:32:34.269994 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:32:34.270372 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:32:34.273945 systemd-resolved[1332]: Defaulting to hostname 'linux'.
Dec 13 01:32:34.275996 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:32:34.277223 systemd[1]: Reached target network.target - Network.
Dec 13 01:32:34.279221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:32:34.283240 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:32:34.288561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:32:33.884731 systemd-resolved[1332]: Clock change detected. Flushing caches.
Dec 13 01:32:33.892772 systemd-journald[1126]: Time jumped backwards, rotating.
Dec 13 01:32:33.884793 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:32:33.897946 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:32:33.884853 systemd-timesyncd[1392]: Initial clock synchronization to Fri 2024-12-13 01:32:33.884675 UTC.
Dec 13 01:32:33.884863 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:32:33.887135 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:32:33.896047 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
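systemd-timesyncd above contacts 10.0.0.1:123 (the DHCP-provided gateway) and steps the system clock slightly backwards, which is why the timestamps jump from 01:32:34.28 back to 01:32:33.88, resolved flushes its caches, and journald rotates ("Time jumped backwards"). On this host the NTP server almost certainly arrived via DHCP; pinning it statically would look like this illustrative sketch:

  # /etc/systemd/timesyncd.conf
  [Time]
  NTP=10.0.0.1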
Dec 13 01:32:33.911860 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:32:33.921571 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 01:32:33.922584 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:32:33.922746 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:32:33.922942 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:32:33.932245 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:32:33.994738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:34.002188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:32:34.002662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:34.006869 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:32:34.011035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:34.020265 kernel: kvm_amd: TSC scaling supported
Dec 13 01:32:34.020307 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:32:34.020319 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:32:34.021252 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:32:34.021266 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:32:34.022262 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:32:34.042868 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:32:34.071731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:34.079158 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:32:34.090027 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:32:34.099056 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:32:34.134209 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:32:34.135811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:32:34.136998 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:32:34.138199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:32:34.139472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:32:34.140944 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:32:34.142140 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:32:34.143410 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:32:34.144651 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:32:34.144679 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:32:34.145587 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:32:34.147327 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:32:34.150139 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:32:34.164546 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:32:34.166970 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:32:34.168544 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:32:34.169712 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:32:34.170717 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:32:34.171698 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:32:34.171725 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:32:34.172869 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:32:34.175261 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:32:34.177174 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:32:34.177802 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:32:34.180531 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:32:34.184029 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:32:34.187032 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:32:34.189947 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:32:34.191188 jq[1438]: false
Dec 13 01:32:34.192298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:32:34.194618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:32:34.204117 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:32:34.205798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:32:34.206450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:32:34.207622 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:32:34.208467 extend-filesystems[1439]: Found loop3
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found loop4
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found loop5
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found sr0
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda1
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda2
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda3
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found usr
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda4
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda6
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda7
Dec 13 01:32:34.209819 extend-filesystems[1439]: Found vda9
Dec 13 01:32:34.209819 extend-filesystems[1439]: Checking size of /dev/vda9
Dec 13 01:32:34.234223 extend-filesystems[1439]: Resized partition /dev/vda9
Dec 13 01:32:34.238194 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:32:34.222027 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:32:34.216425 dbus-daemon[1437]: [system] SELinux support is enabled Dec 13 01:32:34.258083 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1366) Dec 13 01:32:34.258108 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:32:34.287980 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:32:34.224362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:32:34.288074 update_engine[1449]: I20241213 01:32:34.284528 1449 main.cc:92] Flatcar Update Engine starting Dec 13 01:32:34.288074 update_engine[1449]: I20241213 01:32:34.286301 1449 update_check_scheduler.cc:74] Next update check in 5m6s Dec 13 01:32:34.228882 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:32:34.241341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:32:34.241579 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:32:34.288706 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:32:34.288706 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:32:34.288706 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:32:34.298633 jq[1452]: true Dec 13 01:32:34.241997 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:32:34.298884 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Dec 13 01:32:34.242208 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:32:34.246148 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:32:34.246380 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:32:34.300294 jq[1463]: true Dec 13 01:32:34.263563 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:32:34.276315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:32:34.276341 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:32:34.281035 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:32:34.281059 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:32:34.291263 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:32:34.291518 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:32:34.303472 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:32:34.305123 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:32:34.305144 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:32:34.306252 systemd-logind[1445]: New seat seat0. Dec 13 01:32:34.308624 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:32:34.318074 tar[1462]: linux-amd64/helm Dec 13 01:32:34.324096 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
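The extend-filesystems entries above show an online root resize: the ext4 filesystem on /dev/vda9 grows from 553472 to 1864699 4k blocks while mounted on /. A sketch of the equivalent manual sequence (device names taken from the log; using growpart from cloud-utils is an assumption about the tooling):

    growpart /dev/vda 9        # grow partition 9 to the end of the disk
    resize2fs /dev/vda9        # online-resize the mounted ext4 filesystem to fill it
    dumpe2fs -h /dev/vda9 | grep 'Block count'   # confirm the new size in blocks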
Dec 13 01:32:34.348273 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:32:34.350488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:32:34.356601 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:32:34.362799 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:32:34.458202 containerd[1464]: time="2024-12-13T01:32:34.458013579Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:32:34.482888 containerd[1464]: time="2024-12-13T01:32:34.482841330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:34.484603 containerd[1464]: time="2024-12-13T01:32:34.484567978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:34.484603 containerd[1464]: time="2024-12-13T01:32:34.484597734Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:32:34.484650 containerd[1464]: time="2024-12-13T01:32:34.484613293Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:32:34.484816 containerd[1464]: time="2024-12-13T01:32:34.484787750Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:32:34.484863 containerd[1464]: time="2024-12-13T01:32:34.484822005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:34.484969 containerd[1464]: time="2024-12-13T01:32:34.484930318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:34.484999 containerd[1464]: time="2024-12-13T01:32:34.484970283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485206 containerd[1464]: time="2024-12-13T01:32:34.485179525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485206 containerd[1464]: time="2024-12-13T01:32:34.485201867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485246 containerd[1464]: time="2024-12-13T01:32:34.485214631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485246 containerd[1464]: time="2024-12-13T01:32:34.485224650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485341 containerd[1464]: time="2024-12-13T01:32:34.485318496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485579 containerd[1464]: time="2024-12-13T01:32:34.485555360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485728 containerd[1464]: time="2024-12-13T01:32:34.485697567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:34.485728 containerd[1464]: time="2024-12-13T01:32:34.485722884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:32:34.486432 containerd[1464]: time="2024-12-13T01:32:34.485876863Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:32:34.486432 containerd[1464]: time="2024-12-13T01:32:34.485949619Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:32:34.491335 containerd[1464]: time="2024-12-13T01:32:34.491307066Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:32:34.491381 containerd[1464]: time="2024-12-13T01:32:34.491353523Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:32:34.491381 containerd[1464]: time="2024-12-13T01:32:34.491370054Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:32:34.491416 containerd[1464]: time="2024-12-13T01:32:34.491389801Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:32:34.491416 containerd[1464]: time="2024-12-13T01:32:34.491403457Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:32:34.491635 containerd[1464]: time="2024-12-13T01:32:34.491531627Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:32:34.491801 containerd[1464]: time="2024-12-13T01:32:34.491734839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:32:34.491922 containerd[1464]: time="2024-12-13T01:32:34.491864321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:32:34.491922 containerd[1464]: time="2024-12-13T01:32:34.491878989Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:32:34.491922 containerd[1464]: time="2024-12-13T01:32:34.491891092Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:32:34.491922 containerd[1464]: time="2024-12-13T01:32:34.491904467Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.491922 containerd[1464]: time="2024-12-13T01:32:34.491916149Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.492013 containerd[1464]: time="2024-12-13T01:32:34.491927600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Dec 13 01:32:34.492013 containerd[1464]: time="2024-12-13T01:32:34.491955893Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.492013 containerd[1464]: time="2024-12-13T01:32:34.491970661Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.492013 containerd[1464]: time="2024-12-13T01:32:34.491982884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.492013 containerd[1464]: time="2024-12-13T01:32:34.491993985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.492013 containerd[1464]: time="2024-12-13T01:32:34.492004635Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492022087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492033939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492045371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492062603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492075327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492092319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492112 containerd[1464]: time="2024-12-13T01:32:34.492104141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492115863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492128437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492140930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492155187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492167159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492180965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492194811Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492212274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492223014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492234 containerd[1464]: time="2024-12-13T01:32:34.492233243Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492277556Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492292164Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492301752Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492312312Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492321379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492332549Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492342238Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:32:34.492391 containerd[1464]: time="2024-12-13T01:32:34.492351114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:32:34.493082 containerd[1464]: time="2024-12-13T01:32:34.492579863Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:32:34.493082 containerd[1464]: time="2024-12-13T01:32:34.492629516Z" level=info msg="Connect containerd service" Dec 13 01:32:34.493082 containerd[1464]: time="2024-12-13T01:32:34.492698616Z" level=info msg="using legacy CRI server" Dec 13 01:32:34.493082 containerd[1464]: time="2024-12-13T01:32:34.492715969Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:32:34.493082 containerd[1464]: time="2024-12-13T01:32:34.492790368Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:32:34.493614 containerd[1464]: time="2024-12-13T01:32:34.493551936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:32:34.493808 
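The CRI error at the end of the dump above ("no network config found in /etc/cni/net.d") is expected on first boot: the directory stays empty until a network add-on installs a config, and the kubelet will report NotReady until then. A minimal hypothetical conflist that would satisfy the loader (file name, network name, bridge, and subnet are invented for illustration, not what any real add-on writes):

    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-example.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF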
containerd[1464]: time="2024-12-13T01:32:34.493776998Z" level=info msg="Start subscribing containerd event" Dec 13 01:32:34.494034 containerd[1464]: time="2024-12-13T01:32:34.494018912Z" level=info msg="Start recovering state" Dec 13 01:32:34.494166 containerd[1464]: time="2024-12-13T01:32:34.494151350Z" level=info msg="Start event monitor" Dec 13 01:32:34.494438 containerd[1464]: time="2024-12-13T01:32:34.494422529Z" level=info msg="Start snapshots syncer" Dec 13 01:32:34.494489 containerd[1464]: time="2024-12-13T01:32:34.494477913Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:32:34.494531 containerd[1464]: time="2024-12-13T01:32:34.494520753Z" level=info msg="Start streaming server" Dec 13 01:32:34.494664 containerd[1464]: time="2024-12-13T01:32:34.494389928Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:32:34.494759 containerd[1464]: time="2024-12-13T01:32:34.494745074Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:32:34.494929 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:32:34.495548 containerd[1464]: time="2024-12-13T01:32:34.495325923Z" level=info msg="containerd successfully booted in 0.038401s" Dec 13 01:32:34.577156 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:32:34.602947 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:32:34.613163 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:32:34.620925 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:32:34.621186 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:32:34.624284 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:32:34.661533 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:32:34.671217 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:32:34.673432 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:32:34.674895 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:32:34.681031 tar[1462]: linux-amd64/LICENSE Dec 13 01:32:34.681129 tar[1462]: linux-amd64/README.md Dec 13 01:32:34.697562 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:32:35.297175 systemd-networkd[1390]: eth0: Gained IPv6LL Dec 13 01:32:35.300927 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:32:35.302801 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:32:35.318192 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:32:35.321261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:35.323932 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:32:35.344173 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:32:35.344537 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:32:35.346523 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:32:35.353640 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:32:35.969093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:35.971190 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 13 01:32:35.975120 systemd[1]: Startup finished in 1.099s (kernel) + 5.705s (initrd) + 4.004s (userspace) = 10.810s. Dec 13 01:32:35.978256 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:36.451098 kubelet[1550]: E1213 01:32:36.450971 1550 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:36.456076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:36.456303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:32:39.977091 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:32:39.978509 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:51794.service - OpenSSH per-connection server daemon (10.0.0.1:51794). Dec 13 01:32:40.022423 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 51794 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:40.024573 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:40.033710 systemd-logind[1445]: New session 1 of user core. Dec 13 01:32:40.035097 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:32:40.047048 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:32:40.059067 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:32:40.062084 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:32:40.071174 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:32:40.184939 systemd[1568]: Queued start job for default target default.target. Dec 13 01:32:40.197322 systemd[1568]: Created slice app.slice - User Application Slice. Dec 13 01:32:40.197351 systemd[1568]: Reached target paths.target - Paths. Dec 13 01:32:40.197366 systemd[1568]: Reached target timers.target - Timers. Dec 13 01:32:40.199157 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:32:40.211730 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:32:40.211950 systemd[1568]: Reached target sockets.target - Sockets. Dec 13 01:32:40.211978 systemd[1568]: Reached target basic.target - Basic System. Dec 13 01:32:40.212035 systemd[1568]: Reached target default.target - Main User Target. Dec 13 01:32:40.212093 systemd[1568]: Startup finished in 133ms. Dec 13 01:32:40.212645 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:32:40.214501 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:32:40.276519 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:51804.service - OpenSSH per-connection server daemon (10.0.0.1:51804). Dec 13 01:32:40.314061 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 51804 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:40.315691 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:40.320143 systemd-logind[1445]: New session 2 of user core. Dec 13 01:32:40.334006 systemd[1]: Started session-2.scope - Session 2 of User core. 
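The kubelet exit above is the normal pre-join state: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, so until then the unit fails with status 1 and systemd restarts it. A minimal hand-written stand-in, shown only to illustrate the file the kubelet is looking for (the values are assumptions, not what kubeadm would generate here):

    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup:true in the containerd CRI config
    staticPodPath: /etc/kubernetes/manifests
    EOF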
Dec 13 01:32:40.388680 sshd[1579]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:40.399886 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:51804.service: Deactivated successfully. Dec 13 01:32:40.402060 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:32:40.403598 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:32:40.405036 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:51818.service - OpenSSH per-connection server daemon (10.0.0.1:51818). Dec 13 01:32:40.406021 systemd-logind[1445]: Removed session 2. Dec 13 01:32:40.449534 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 51818 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:40.451228 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:40.455322 systemd-logind[1445]: New session 3 of user core. Dec 13 01:32:40.464993 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:32:40.515035 sshd[1586]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:40.528064 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:51818.service: Deactivated successfully. Dec 13 01:32:40.530208 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:32:40.532061 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:32:40.545122 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:51822.service - OpenSSH per-connection server daemon (10.0.0.1:51822). Dec 13 01:32:40.546491 systemd-logind[1445]: Removed session 3. Dec 13 01:32:40.573816 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 51822 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:40.575496 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:40.579648 systemd-logind[1445]: New session 4 of user core. Dec 13 01:32:40.592973 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:32:40.647757 sshd[1593]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:40.658567 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:51822.service: Deactivated successfully. Dec 13 01:32:40.660312 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:32:40.662160 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:32:40.672100 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:51828.service - OpenSSH per-connection server daemon (10.0.0.1:51828). Dec 13 01:32:40.673314 systemd-logind[1445]: Removed session 4. Dec 13 01:32:40.701464 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 51828 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:40.703219 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:40.707655 systemd-logind[1445]: New session 5 of user core. Dec 13 01:32:40.716949 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:32:40.776057 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:32:40.776439 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:40.795227 sudo[1603]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:40.797027 sshd[1600]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:40.809712 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:51828.service: Deactivated successfully. 
Dec 13 01:32:40.811584 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:32:40.813276 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:32:40.814719 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:51832.service - OpenSSH per-connection server daemon (10.0.0.1:51832). Dec 13 01:32:40.815461 systemd-logind[1445]: Removed session 5. Dec 13 01:32:40.848909 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 51832 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:40.850489 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:40.854532 systemd-logind[1445]: New session 6 of user core. Dec 13 01:32:40.864967 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:32:40.919286 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:32:40.919632 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:40.923594 sudo[1612]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:40.930489 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:32:40.930942 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:40.946146 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:32:40.948020 auditctl[1615]: No rules Dec 13 01:32:40.948441 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:32:40.948665 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:32:40.951722 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:32:40.982977 augenrules[1633]: No rules Dec 13 01:32:40.984897 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:32:40.986238 sudo[1611]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:40.988154 sshd[1608]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:41.003673 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:51832.service: Deactivated successfully. Dec 13 01:32:41.005564 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:32:41.007263 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:32:41.018250 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Dec 13 01:32:41.019194 systemd-logind[1445]: Removed session 6. Dec 13 01:32:41.046015 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:32:41.047633 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:41.051571 systemd-logind[1445]: New session 7 of user core. Dec 13 01:32:41.058958 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:32:41.112640 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:32:41.113017 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:32:41.401054 systemd[1]: Starting docker.service - Docker Application Container Engine... 
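The auditctl/augenrules exchange above is the audit-rules.service reload cycle after the two generated rule files were removed. A sketch of the same steps run by hand with the standard audit userspace tools:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load    # merge /etc/audit/rules.d/*.rules and load them into the kernel
    auditctl -l          # list active rules; prints "No rules" when the set is empty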
Dec 13 01:32:41.401236 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:32:41.676125 dockerd[1663]: time="2024-12-13T01:32:41.675097728Z" level=info msg="Starting up" Dec 13 01:32:41.991038 dockerd[1663]: time="2024-12-13T01:32:41.990904088Z" level=info msg="Loading containers: start." Dec 13 01:32:42.100872 kernel: Initializing XFRM netlink socket Dec 13 01:32:42.181149 systemd-networkd[1390]: docker0: Link UP Dec 13 01:32:42.208531 dockerd[1663]: time="2024-12-13T01:32:42.208459993Z" level=info msg="Loading containers: done." Dec 13 01:32:42.224100 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3749923147-merged.mount: Deactivated successfully. Dec 13 01:32:42.226183 dockerd[1663]: time="2024-12-13T01:32:42.226131774Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:32:42.226305 dockerd[1663]: time="2024-12-13T01:32:42.226218065Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:32:42.226364 dockerd[1663]: time="2024-12-13T01:32:42.226339914Z" level=info msg="Daemon has completed initialization" Dec 13 01:32:42.262709 dockerd[1663]: time="2024-12-13T01:32:42.262564167Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:32:42.262803 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:32:42.981533 containerd[1464]: time="2024-12-13T01:32:42.981444905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:32:43.673906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726164881.mount: Deactivated successfully. 
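The overlay2 warning above only degrades image-build diffing performance; runtime behavior is unaffected. A quick way to confirm what the daemon settled on (docker CLI assumed present on the host):

    docker info --format '{{.Driver}}'              # expect: overlay2
    docker version --format '{{.Server.Version}}'   # expect: 26.1.0, as logged above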
Dec 13 01:32:44.650961 containerd[1464]: time="2024-12-13T01:32:44.650893457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:44.651700 containerd[1464]: time="2024-12-13T01:32:44.651612075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:32:44.652745 containerd[1464]: time="2024-12-13T01:32:44.652711066Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:44.655350 containerd[1464]: time="2024-12-13T01:32:44.655313106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:44.656433 containerd[1464]: time="2024-12-13T01:32:44.656397099Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.674907169s" Dec 13 01:32:44.656483 containerd[1464]: time="2024-12-13T01:32:44.656432004Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:32:44.680575 containerd[1464]: time="2024-12-13T01:32:44.680526480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:32:46.507662 containerd[1464]: time="2024-12-13T01:32:46.507515687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:46.508526 containerd[1464]: time="2024-12-13T01:32:46.508480526Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:32:46.509668 containerd[1464]: time="2024-12-13T01:32:46.509627397Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:46.512599 containerd[1464]: time="2024-12-13T01:32:46.512527896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:46.513722 containerd[1464]: time="2024-12-13T01:32:46.513687381Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.83312819s" Dec 13 01:32:46.513774 containerd[1464]: time="2024-12-13T01:32:46.513721194Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:32:46.538338 
containerd[1464]: time="2024-12-13T01:32:46.538289438Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:32:46.706632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:32:46.722174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:46.878043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:46.882535 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:47.159301 kubelet[1897]: E1213 01:32:47.159151 1897 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:47.166297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:47.166523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:32:47.823231 containerd[1464]: time="2024-12-13T01:32:47.823162885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:47.824020 containerd[1464]: time="2024-12-13T01:32:47.823946194Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:32:47.825436 containerd[1464]: time="2024-12-13T01:32:47.825383780Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:47.828477 containerd[1464]: time="2024-12-13T01:32:47.828437346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:47.829545 containerd[1464]: time="2024-12-13T01:32:47.829507724Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.291176216s" Dec 13 01:32:47.829545 containerd[1464]: time="2024-12-13T01:32:47.829541517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:32:47.853678 containerd[1464]: time="2024-12-13T01:32:47.853621866Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:32:48.974982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693453462.mount: Deactivated successfully. 
Dec 13 01:32:49.239339 containerd[1464]: time="2024-12-13T01:32:49.239144295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:49.240275 containerd[1464]: time="2024-12-13T01:32:49.240202690Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:32:49.241542 containerd[1464]: time="2024-12-13T01:32:49.241516805Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:49.243754 containerd[1464]: time="2024-12-13T01:32:49.243709717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:49.244338 containerd[1464]: time="2024-12-13T01:32:49.244283343Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.390621111s" Dec 13 01:32:49.244338 containerd[1464]: time="2024-12-13T01:32:49.244316776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:32:49.267252 containerd[1464]: time="2024-12-13T01:32:49.267210690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:32:49.872336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666744996.mount: Deactivated successfully. 
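Each PullImage/ImageCreate pair above is containerd fetching a control-plane image into its k8s.io namespace ahead of cluster bootstrap. The same pull can be reproduced with the ctr client bundled with containerd (image reference taken from the log):

    ctr --namespace k8s.io images pull registry.k8s.io/kube-proxy:v1.30.8
    ctr --namespace k8s.io images ls -q | grep kube-proxy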
Dec 13 01:32:50.884626 containerd[1464]: time="2024-12-13T01:32:50.884552533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:50.885503 containerd[1464]: time="2024-12-13T01:32:50.885432063Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:32:50.886852 containerd[1464]: time="2024-12-13T01:32:50.886805879Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:50.889495 containerd[1464]: time="2024-12-13T01:32:50.889430411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:50.890535 containerd[1464]: time="2024-12-13T01:32:50.890497963Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.623249742s" Dec 13 01:32:50.890535 containerd[1464]: time="2024-12-13T01:32:50.890532187Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:32:50.914823 containerd[1464]: time="2024-12-13T01:32:50.914782275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:32:51.385285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917697742.mount: Deactivated successfully. 
Dec 13 01:32:51.391798 containerd[1464]: time="2024-12-13T01:32:51.391739500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:51.392504 containerd[1464]: time="2024-12-13T01:32:51.392424254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:32:51.393590 containerd[1464]: time="2024-12-13T01:32:51.393549604Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:51.395595 containerd[1464]: time="2024-12-13T01:32:51.395553162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:51.396281 containerd[1464]: time="2024-12-13T01:32:51.396234349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 481.416959ms" Dec 13 01:32:51.396281 containerd[1464]: time="2024-12-13T01:32:51.396275647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:32:51.418949 containerd[1464]: time="2024-12-13T01:32:51.418811860Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:32:51.984198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600732064.mount: Deactivated successfully. Dec 13 01:32:53.882502 containerd[1464]: time="2024-12-13T01:32:53.882421728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:53.883363 containerd[1464]: time="2024-12-13T01:32:53.883277904Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:32:53.884902 containerd[1464]: time="2024-12-13T01:32:53.884860572Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:53.887570 containerd[1464]: time="2024-12-13T01:32:53.887500603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:53.888901 containerd[1464]: time="2024-12-13T01:32:53.888846276Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.469969123s" Dec 13 01:32:53.888901 containerd[1464]: time="2024-12-13T01:32:53.888895569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:32:56.206444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:32:56.218078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:56.236249 systemd[1]: Reloading requested from client PID 2119 ('systemctl') (unit session-7.scope)... Dec 13 01:32:56.236281 systemd[1]: Reloading... Dec 13 01:32:56.326855 zram_generator::config[2164]: No configuration found. Dec 13 01:32:56.568434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:56.645419 systemd[1]: Reloading finished in 408 ms. Dec 13 01:32:56.697541 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:56.700731 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:56.701013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:56.702764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:56.853185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:56.859060 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:56.895410 kubelet[2208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:56.895410 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:56.895410 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:56.895809 kubelet[2208]: I1213 01:32:56.895432 2208 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:57.390539 kubelet[2208]: I1213 01:32:57.390498 2208 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:32:57.390539 kubelet[2208]: I1213 01:32:57.390530 2208 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:57.390729 kubelet[2208]: I1213 01:32:57.390711 2208 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:32:57.405590 kubelet[2208]: I1213 01:32:57.405557 2208 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:57.407599 kubelet[2208]: E1213 01:32:57.407085 2208 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.421842 kubelet[2208]: I1213 01:32:57.421801 2208 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:57.422112 kubelet[2208]: I1213 01:32:57.422075 2208 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:57.422298 kubelet[2208]: I1213 01:32:57.422103 2208 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:57.422703 kubelet[2208]: I1213 01:32:57.422680 2208 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:57.422703 kubelet[2208]: I1213 01:32:57.422695 2208 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:57.422873 kubelet[2208]: I1213 01:32:57.422851 2208 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:57.423488 kubelet[2208]: I1213 01:32:57.423464 2208 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:32:57.423524 kubelet[2208]: I1213 01:32:57.423489 2208 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:57.423524 kubelet[2208]: I1213 01:32:57.423512 2208 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:57.423565 kubelet[2208]: I1213 01:32:57.423545 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:57.423975 kubelet[2208]: W1213 01:32:57.423947 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.424012 kubelet[2208]: E1213 01:32:57.423982 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.424012 kubelet[2208]: W1213 01:32:57.423958 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.424012 kubelet[2208]: E1213 01:32:57.424008 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.426672 kubelet[2208]: I1213 01:32:57.426618 2208 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:57.427890 kubelet[2208]: I1213 01:32:57.427820 2208 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:57.427960 kubelet[2208]: W1213 01:32:57.427948 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:32:57.428701 kubelet[2208]: I1213 01:32:57.428650 2208 server.go:1264] "Started kubelet" Dec 13 01:32:57.430668 kubelet[2208]: I1213 01:32:57.429971 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:57.432658 kubelet[2208]: I1213 01:32:57.432506 2208 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:57.434105 kubelet[2208]: I1213 01:32:57.433602 2208 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:32:57.434423 kubelet[2208]: E1213 01:32:57.434344 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810988a9020e00c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:32:57.428623372 +0000 UTC m=+0.564482567,LastTimestamp:2024-12-13 01:32:57.428623372 +0000 UTC m=+0.564482567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:32:57.434727 kubelet[2208]: I1213 01:32:57.434674 2208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:57.435573 kubelet[2208]: I1213 01:32:57.434972 2208 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:57.435573 kubelet[2208]: I1213 01:32:57.434976 2208 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:57.435573 kubelet[2208]: I1213 01:32:57.435106 2208 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:32:57.435573 kubelet[2208]: I1213 01:32:57.435170 2208 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:32:57.435573 kubelet[2208]: E1213 01:32:57.435442 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms" Dec 13 01:32:57.435573 kubelet[2208]: E1213 01:32:57.435508 2208 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:57.435573 kubelet[2208]: W1213 01:32:57.435493 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.435573 kubelet[2208]: E1213 01:32:57.435533 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.436870 kubelet[2208]: I1213 01:32:57.436799 2208 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:57.436870 kubelet[2208]: I1213 01:32:57.436814 2208 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:57.437068 kubelet[2208]: I1213 01:32:57.436907 2208 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:57.452712 kubelet[2208]: I1213 01:32:57.452656 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:57.453978 kubelet[2208]: I1213 01:32:57.453954 2208 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:57.453978 kubelet[2208]: I1213 01:32:57.453970 2208 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:57.454073 kubelet[2208]: I1213 01:32:57.453989 2208 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:57.454185 kubelet[2208]: I1213 01:32:57.454151 2208 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:32:57.454232 kubelet[2208]: I1213 01:32:57.454192 2208 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:57.454232 kubelet[2208]: I1213 01:32:57.454210 2208 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:32:57.455577 kubelet[2208]: E1213 01:32:57.454794 2208 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:57.455637 kubelet[2208]: W1213 01:32:57.455592 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.455660 kubelet[2208]: E1213 01:32:57.455641 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:57.536778 kubelet[2208]: I1213 01:32:57.536755 2208 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:57.537302 kubelet[2208]: E1213 01:32:57.537273 2208 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 13 01:32:57.555473 kubelet[2208]: E1213 01:32:57.555425 2208 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:32:57.635908 kubelet[2208]: E1213 01:32:57.635873 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" Dec 13 01:32:57.739050 kubelet[2208]: I1213 01:32:57.738959 2208 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:57.739218 kubelet[2208]: E1213 01:32:57.739195 2208 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 13 01:32:57.755892 kubelet[2208]: I1213 01:32:57.755870 2208 policy_none.go:49] "None policy: Start" Dec 13 01:32:57.755931 kubelet[2208]: E1213 01:32:57.755918 2208 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:32:57.756323 kubelet[2208]: I1213 01:32:57.756308 2208 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:57.756372 kubelet[2208]: I1213 01:32:57.756327 2208 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:57.764333 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:32:57.778194 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:32:57.781232 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:32:57.797698 kubelet[2208]: I1213 01:32:57.797678 2208 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:57.797968 kubelet[2208]: I1213 01:32:57.797921 2208 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:32:57.798080 kubelet[2208]: I1213 01:32:57.798060 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:57.799716 kubelet[2208]: E1213 01:32:57.799694 2208 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:32:58.037236 kubelet[2208]: E1213 01:32:58.037126 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" Dec 13 01:32:58.140993 kubelet[2208]: I1213 01:32:58.140955 2208 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:58.141311 kubelet[2208]: E1213 01:32:58.141283 2208 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 13 01:32:58.156546 kubelet[2208]: I1213 01:32:58.156502 2208 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:32:58.157420 kubelet[2208]: I1213 01:32:58.157399 2208 topology_manager.go:215] "Topology Admit Handler" podUID="21befe9c7564c6e773aa24f68dbf0432" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:32:58.158214 kubelet[2208]: I1213 01:32:58.158165 2208 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:32:58.163864 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 01:32:58.184768 systemd[1]: Created slice kubepods-burstable-pod21befe9c7564c6e773aa24f68dbf0432.slice - libcontainer container kubepods-burstable-pod21befe9c7564c6e773aa24f68dbf0432.slice. Dec 13 01:32:58.188299 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. 
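While the API server at 10.0.0.83:6443 is still refusing connections, the three control-plane pods are admitted anyway: they come from the static pod path /etc/kubernetes/manifests registered as a pod source earlier. A minimal sketch of watching such a manifest directory, assuming the third-party github.com/fsnotify/fsnotify package; kubelet's real file source is considerably more involved (it also re-lists the directory periodically), so treat this purely as an illustration:

```go
// Minimal sketch: watch a static-pod manifest directory for changes,
// in the spirit of kubelet's "Adding static pod path" source.
// Uses github.com/fsnotify/fsnotify; NOT kubelet's actual implementation.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/kubernetes/manifests"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// A real kubelet would (re)parse the manifest here and
			// admit, update, or remove the corresponding static pod.
			log.Printf("manifest event: %s %s", ev.Op, ev.Name)
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```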
Dec 13 01:32:58.240263 kubelet[2208]: I1213 01:32:58.240208 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21befe9c7564c6e773aa24f68dbf0432-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"21befe9c7564c6e773aa24f68dbf0432\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:58.240263 kubelet[2208]: I1213 01:32:58.240277 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:58.240414 kubelet[2208]: I1213 01:32:58.240307 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:58.240414 kubelet[2208]: I1213 01:32:58.240328 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:58.240414 kubelet[2208]: I1213 01:32:58.240346 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21befe9c7564c6e773aa24f68dbf0432-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"21befe9c7564c6e773aa24f68dbf0432\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:58.240414 kubelet[2208]: I1213 01:32:58.240364 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21befe9c7564c6e773aa24f68dbf0432-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"21befe9c7564c6e773aa24f68dbf0432\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:32:58.240414 kubelet[2208]: I1213 01:32:58.240381 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:58.240555 kubelet[2208]: I1213 01:32:58.240400 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:32:58.240555 kubelet[2208]: I1213 01:32:58.240420 2208 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " 
pod="kube-system/kube-scheduler-localhost" Dec 13 01:32:58.482587 kubelet[2208]: E1213 01:32:58.482438 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:58.483255 containerd[1464]: time="2024-12-13T01:32:58.483203807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:58.487465 kubelet[2208]: E1213 01:32:58.487431 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:58.487772 containerd[1464]: time="2024-12-13T01:32:58.487742438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:21befe9c7564c6e773aa24f68dbf0432,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:58.489212 kubelet[2208]: W1213 01:32:58.489134 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.489212 kubelet[2208]: E1213 01:32:58.489212 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.490227 kubelet[2208]: E1213 01:32:58.490201 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:58.490495 containerd[1464]: time="2024-12-13T01:32:58.490464052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:58.613507 kubelet[2208]: W1213 01:32:58.613437 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.613507 kubelet[2208]: E1213 01:32:58.613508 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.682318 kubelet[2208]: W1213 01:32:58.682266 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.682318 kubelet[2208]: E1213 01:32:58.682306 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.838411 kubelet[2208]: E1213 01:32:58.838298 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="1.6s" Dec 13 01:32:58.934502 kubelet[2208]: W1213 01:32:58.934446 2208 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.934502 kubelet[2208]: E1213 01:32:58.934499 2208 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:58.942510 kubelet[2208]: I1213 01:32:58.942477 2208 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:32:58.942719 kubelet[2208]: E1213 01:32:58.942686 2208 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 13 01:32:59.536311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836898047.mount: Deactivated successfully. Dec 13 01:32:59.544712 containerd[1464]: time="2024-12-13T01:32:59.544642683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:59.547018 containerd[1464]: time="2024-12-13T01:32:59.546964217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:59.548192 containerd[1464]: time="2024-12-13T01:32:59.548123131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:59.549338 containerd[1464]: time="2024-12-13T01:32:59.549297823Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:59.550278 containerd[1464]: time="2024-12-13T01:32:59.550240181Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:59.550940 containerd[1464]: time="2024-12-13T01:32:59.550870022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:32:59.551886 containerd[1464]: time="2024-12-13T01:32:59.551816948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:59.554053 containerd[1464]: time="2024-12-13T01:32:59.554013297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:59.555118 containerd[1464]: time="2024-12-13T01:32:59.555088052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.064556614s" Dec 13 01:32:59.556242 containerd[1464]: time="2024-12-13T01:32:59.556187414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.072893818s" Dec 13 01:32:59.559758 containerd[1464]: time="2024-12-13T01:32:59.559707686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.071906206s" Dec 13 01:32:59.587773 kubelet[2208]: E1213 01:32:59.587738 2208 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.83:6443: connect: connection refused Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.681430539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.681483709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.681497134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.681570332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.680796110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.680877763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.680891819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:59.681626 containerd[1464]: time="2024-12-13T01:32:59.680988731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:59.683278 containerd[1464]: time="2024-12-13T01:32:59.683197964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:59.683278 containerd[1464]: time="2024-12-13T01:32:59.683252386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:59.683463 containerd[1464]: time="2024-12-13T01:32:59.683283434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:59.683463 containerd[1464]: time="2024-12-13T01:32:59.683436762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:59.704009 systemd[1]: Started cri-containerd-d6de0f9b71a771bd7f560a7df4b3685ce18b7fb656eb1b8599bd1354550b989c.scope - libcontainer container d6de0f9b71a771bd7f560a7df4b3685ce18b7fb656eb1b8599bd1354550b989c. Dec 13 01:32:59.708582 systemd[1]: Started cri-containerd-2c4de0a97d0d3fdcb7f37e92ffdd02370b518c83aefb3c27fad3c6dcbf0b3630.scope - libcontainer container 2c4de0a97d0d3fdcb7f37e92ffdd02370b518c83aefb3c27fad3c6dcbf0b3630. Dec 13 01:32:59.710459 systemd[1]: Started cri-containerd-3f63bd7c4bdf7b88816b188ae8f7b5a742606919cbcf326aa4cccd32fad175d5.scope - libcontainer container 3f63bd7c4bdf7b88816b188ae8f7b5a742606919cbcf326aa4cccd32fad175d5. Dec 13 01:32:59.750103 containerd[1464]: time="2024-12-13T01:32:59.750011794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6de0f9b71a771bd7f560a7df4b3685ce18b7fb656eb1b8599bd1354550b989c\"" Dec 13 01:32:59.751703 kubelet[2208]: E1213 01:32:59.751663 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:59.756325 containerd[1464]: time="2024-12-13T01:32:59.756282894Z" level=info msg="CreateContainer within sandbox \"d6de0f9b71a771bd7f560a7df4b3685ce18b7fb656eb1b8599bd1354550b989c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:32:59.756710 containerd[1464]: time="2024-12-13T01:32:59.756565725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:21befe9c7564c6e773aa24f68dbf0432,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f63bd7c4bdf7b88816b188ae8f7b5a742606919cbcf326aa4cccd32fad175d5\"" Dec 13 01:32:59.757315 containerd[1464]: time="2024-12-13T01:32:59.757286256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c4de0a97d0d3fdcb7f37e92ffdd02370b518c83aefb3c27fad3c6dcbf0b3630\"" Dec 13 01:32:59.759064 kubelet[2208]: E1213 01:32:59.759006 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:59.760032 kubelet[2208]: E1213 01:32:59.760012 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:32:59.761276 containerd[1464]: time="2024-12-13T01:32:59.761231745Z" level=info msg="CreateContainer within sandbox \"3f63bd7c4bdf7b88816b188ae8f7b5a742606919cbcf326aa4cccd32fad175d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:32:59.762501 containerd[1464]: time="2024-12-13T01:32:59.762473304Z" level=info msg="CreateContainer within sandbox \"2c4de0a97d0d3fdcb7f37e92ffdd02370b518c83aefb3c27fad3c6dcbf0b3630\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:32:59.781105 containerd[1464]: time="2024-12-13T01:32:59.781053238Z" level=info msg="CreateContainer within sandbox \"d6de0f9b71a771bd7f560a7df4b3685ce18b7fb656eb1b8599bd1354550b989c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2deb081d84a64fa7dfefa3a4147d039f2b9bfba32cb66c1a23ed2e2d364e8e2f\"" Dec 13 01:32:59.781567 containerd[1464]: time="2024-12-13T01:32:59.781537917Z" level=info msg="StartContainer for \"2deb081d84a64fa7dfefa3a4147d039f2b9bfba32cb66c1a23ed2e2d364e8e2f\"" Dec 13 01:32:59.790227 containerd[1464]: time="2024-12-13T01:32:59.789011051Z" level=info msg="CreateContainer within sandbox \"2c4de0a97d0d3fdcb7f37e92ffdd02370b518c83aefb3c27fad3c6dcbf0b3630\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3af6dcf9e9d086e0febce46651dd299386a789c93c1e5db09643df46249e9e99\"" Dec 13 01:32:59.790227 containerd[1464]: time="2024-12-13T01:32:59.789515197Z" level=info msg="StartContainer for \"3af6dcf9e9d086e0febce46651dd299386a789c93c1e5db09643df46249e9e99\"" Dec 13 01:32:59.791771 containerd[1464]: time="2024-12-13T01:32:59.791728668Z" level=info msg="CreateContainer within sandbox \"3f63bd7c4bdf7b88816b188ae8f7b5a742606919cbcf326aa4cccd32fad175d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fea44e41922ea167d8b4f86d54da55a005035a9c6e4457c1edf39eb415559c59\"" Dec 13 01:32:59.792134 containerd[1464]: time="2024-12-13T01:32:59.792094284Z" level=info msg="StartContainer for \"fea44e41922ea167d8b4f86d54da55a005035a9c6e4457c1edf39eb415559c59\"" Dec 13 01:32:59.816096 systemd[1]: Started cri-containerd-2deb081d84a64fa7dfefa3a4147d039f2b9bfba32cb66c1a23ed2e2d364e8e2f.scope - libcontainer container 2deb081d84a64fa7dfefa3a4147d039f2b9bfba32cb66c1a23ed2e2d364e8e2f. Dec 13 01:32:59.819643 systemd[1]: Started cri-containerd-3af6dcf9e9d086e0febce46651dd299386a789c93c1e5db09643df46249e9e99.scope - libcontainer container 3af6dcf9e9d086e0febce46651dd299386a789c93c1e5db09643df46249e9e99. Dec 13 01:32:59.824212 systemd[1]: Started cri-containerd-fea44e41922ea167d8b4f86d54da55a005035a9c6e4457c1edf39eb415559c59.scope - libcontainer container fea44e41922ea167d8b4f86d54da55a005035a9c6e4457c1edf39eb415559c59. 
Dec 13 01:32:59.869506 containerd[1464]: time="2024-12-13T01:32:59.869458609Z" level=info msg="StartContainer for \"3af6dcf9e9d086e0febce46651dd299386a789c93c1e5db09643df46249e9e99\" returns successfully" Dec 13 01:32:59.870296 containerd[1464]: time="2024-12-13T01:32:59.869613760Z" level=info msg="StartContainer for \"2deb081d84a64fa7dfefa3a4147d039f2b9bfba32cb66c1a23ed2e2d364e8e2f\" returns successfully" Dec 13 01:32:59.874096 kubelet[2208]: E1213 01:32:59.873970 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810988a9020e00c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:32:57.428623372 +0000 UTC m=+0.564482567,LastTimestamp:2024-12-13 01:32:57.428623372 +0000 UTC m=+0.564482567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:32:59.879780 containerd[1464]: time="2024-12-13T01:32:59.879725523Z" level=info msg="StartContainer for \"fea44e41922ea167d8b4f86d54da55a005035a9c6e4457c1edf39eb415559c59\" returns successfully" Dec 13 01:33:00.463018 kubelet[2208]: E1213 01:33:00.462925 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:00.466035 kubelet[2208]: E1213 01:33:00.465939 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:00.467103 kubelet[2208]: E1213 01:33:00.466999 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:00.544314 kubelet[2208]: I1213 01:33:00.544261 2208 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:33:00.721187 kubelet[2208]: E1213 01:33:00.721036 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:33:00.824136 kubelet[2208]: I1213 01:33:00.824043 2208 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:33:00.832026 kubelet[2208]: E1213 01:33:00.831985 2208 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:33:00.932928 kubelet[2208]: E1213 01:33:00.932887 2208 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:33:01.033498 kubelet[2208]: E1213 01:33:01.033413 2208 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:33:01.134025 kubelet[2208]: E1213 01:33:01.133981 2208 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:33:01.234454 kubelet[2208]: E1213 01:33:01.234413 2208 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 
13 01:33:01.335011 kubelet[2208]: E1213 01:33:01.334885 2208 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:33:01.425672 kubelet[2208]: I1213 01:33:01.425640 2208 apiserver.go:52] "Watching apiserver" Dec 13 01:33:01.435391 kubelet[2208]: I1213 01:33:01.435368 2208 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:33:01.472211 kubelet[2208]: E1213 01:33:01.472178 2208 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:01.472547 kubelet[2208]: E1213 01:33:01.472526 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:02.789078 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-7.scope)... Dec 13 01:33:02.789103 systemd[1]: Reloading... Dec 13 01:33:02.870882 zram_generator::config[2534]: No configuration found. Dec 13 01:33:02.976061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:33:03.067134 systemd[1]: Reloading finished in 277 ms. Dec 13 01:33:03.122467 kubelet[2208]: I1213 01:33:03.122401 2208 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:33:03.122580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:03.131545 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:33:03.131825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:03.131900 systemd[1]: kubelet.service: Consumed 1.011s CPU time, 118.6M memory peak, 0B memory swap peak. Dec 13 01:33:03.139348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:03.296928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:03.301822 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:33:03.346063 kubelet[2575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:33:03.346063 kubelet[2575]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:33:03.346063 kubelet[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
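The recurring dns.go "Nameserver limits exceeded" errors above are benign but noisy: classic resolv.conf resolvers honor at most three nameservers, so kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A self-contained sketch of that check — illustrative only, not kubelet's dns.go:

```go
// Sketch of the check behind the "Nameserver limits exceeded" warnings:
// at most 3 nameservers are honored, so extra entries in /etc/resolv.conf
// are dropped and the applied set is reported.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded; applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
	}
}
```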
Dec 13 01:33:03.346063 kubelet[2575]: I1213 01:33:03.346011 2575 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:33:03.350636 kubelet[2575]: I1213 01:33:03.350610 2575 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:33:03.350795 kubelet[2575]: I1213 01:33:03.350720 2575 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:33:03.351022 kubelet[2575]: I1213 01:33:03.350986 2575 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:33:03.354563 kubelet[2575]: I1213 01:33:03.354536 2575 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:33:03.355570 kubelet[2575]: I1213 01:33:03.355537 2575 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:33:03.362747 kubelet[2575]: I1213 01:33:03.362725 2575 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:33:03.362995 kubelet[2575]: I1213 01:33:03.362953 2575 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:33:03.363171 kubelet[2575]: I1213 01:33:03.362985 2575 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:33:03.363246 kubelet[2575]: I1213 01:33:03.363183 2575 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:33:03.363246 kubelet[2575]: I1213 01:33:03.363193 2575 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:33:03.363246 kubelet[2575]: I1213 01:33:03.363242 2575 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:03.363351 kubelet[2575]: I1213 01:33:03.363333 2575 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:33:03.363351 kubelet[2575]: I1213 01:33:03.363347 2575 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Dec 13 01:33:03.363409 kubelet[2575]: I1213 01:33:03.363367 2575 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:33:03.363409 kubelet[2575]: I1213 01:33:03.363388 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:33:03.364444 kubelet[2575]: I1213 01:33:03.364405 2575 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:33:03.364698 kubelet[2575]: I1213 01:33:03.364615 2575 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:33:03.365147 kubelet[2575]: I1213 01:33:03.365125 2575 server.go:1264] "Started kubelet" Dec 13 01:33:03.367609 kubelet[2575]: I1213 01:33:03.367590 2575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:33:03.368257 kubelet[2575]: I1213 01:33:03.368232 2575 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:33:03.371857 kubelet[2575]: I1213 01:33:03.369551 2575 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:33:03.371857 kubelet[2575]: I1213 01:33:03.370636 2575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:33:03.371857 kubelet[2575]: I1213 01:33:03.370921 2575 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:33:03.373503 kubelet[2575]: I1213 01:33:03.373411 2575 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:33:03.373566 kubelet[2575]: I1213 01:33:03.373552 2575 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:33:03.375225 kubelet[2575]: I1213 01:33:03.374607 2575 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:33:03.378307 kubelet[2575]: E1213 01:33:03.377645 2575 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:33:03.380006 kubelet[2575]: I1213 01:33:03.379432 2575 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:33:03.380006 kubelet[2575]: I1213 01:33:03.379507 2575 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:33:03.381526 kubelet[2575]: I1213 01:33:03.381504 2575 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:33:03.385427 kubelet[2575]: I1213 01:33:03.385390 2575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:33:03.387248 kubelet[2575]: I1213 01:33:03.387214 2575 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:33:03.387248 kubelet[2575]: I1213 01:33:03.387247 2575 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:33:03.387325 kubelet[2575]: I1213 01:33:03.387264 2575 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:33:03.387325 kubelet[2575]: E1213 01:33:03.387305 2575 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:33:03.419570 kubelet[2575]: I1213 01:33:03.419520 2575 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:33:03.419570 kubelet[2575]: I1213 01:33:03.419541 2575 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:33:03.419570 kubelet[2575]: I1213 01:33:03.419560 2575 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:03.419760 kubelet[2575]: I1213 01:33:03.419702 2575 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:33:03.419760 kubelet[2575]: I1213 01:33:03.419712 2575 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:33:03.419760 kubelet[2575]: I1213 01:33:03.419731 2575 policy_none.go:49] "None policy: Start" Dec 13 01:33:03.420468 kubelet[2575]: I1213 01:33:03.420436 2575 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:33:03.420468 kubelet[2575]: I1213 01:33:03.420469 2575 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:33:03.420690 kubelet[2575]: I1213 01:33:03.420670 2575 state_mem.go:75] "Updated machine memory state" Dec 13 01:33:03.425440 kubelet[2575]: I1213 01:33:03.425417 2575 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:33:03.425903 kubelet[2575]: I1213 01:33:03.425625 2575 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:33:03.425903 kubelet[2575]: I1213 01:33:03.425772 2575 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:33:03.478218 kubelet[2575]: I1213 01:33:03.478179 2575 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:33:03.485102 kubelet[2575]: I1213 01:33:03.485066 2575 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:33:03.485172 kubelet[2575]: I1213 01:33:03.485158 2575 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:33:03.487635 kubelet[2575]: I1213 01:33:03.487585 2575 topology_manager.go:215] "Topology Admit Handler" podUID="21befe9c7564c6e773aa24f68dbf0432" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:33:03.487802 kubelet[2575]: I1213 01:33:03.487682 2575 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:33:03.487802 kubelet[2575]: I1213 01:33:03.487746 2575 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:33:03.575964 kubelet[2575]: I1213 01:33:03.575903 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21befe9c7564c6e773aa24f68dbf0432-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"21befe9c7564c6e773aa24f68dbf0432\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:03.575964 
kubelet[2575]: I1213 01:33:03.575943 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21befe9c7564c6e773aa24f68dbf0432-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"21befe9c7564c6e773aa24f68dbf0432\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:03.575964 kubelet[2575]: I1213 01:33:03.575963 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:03.575964 kubelet[2575]: I1213 01:33:03.575981 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:03.576196 kubelet[2575]: I1213 01:33:03.576018 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:33:03.576196 kubelet[2575]: I1213 01:33:03.576042 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21befe9c7564c6e773aa24f68dbf0432-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"21befe9c7564c6e773aa24f68dbf0432\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:03.576196 kubelet[2575]: I1213 01:33:03.576059 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:03.576196 kubelet[2575]: I1213 01:33:03.576072 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:03.576196 kubelet[2575]: I1213 01:33:03.576085 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:03.793948 kubelet[2575]: E1213 01:33:03.793607 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:03.793948 kubelet[2575]: E1213 01:33:03.793821 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:03.793948 kubelet[2575]: E1213 01:33:03.793919 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:04.364723 kubelet[2575]: I1213 01:33:04.364678 2575 apiserver.go:52] "Watching apiserver" Dec 13 01:33:04.373780 kubelet[2575]: I1213 01:33:04.373721 2575 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:33:04.400739 kubelet[2575]: E1213 01:33:04.400691 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:04.401421 kubelet[2575]: E1213 01:33:04.401390 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:04.426089 kubelet[2575]: E1213 01:33:04.426035 2575 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:04.426478 kubelet[2575]: E1213 01:33:04.426455 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:04.468325 kubelet[2575]: I1213 01:33:04.468245 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4682256790000001 podStartE2EDuration="1.468225679s" podCreationTimestamp="2024-12-13 01:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:04.458113495 +0000 UTC m=+1.151775701" watchObservedRunningTime="2024-12-13 01:33:04.468225679 +0000 UTC m=+1.161887885" Dec 13 01:33:04.468652 kubelet[2575]: I1213 01:33:04.468370 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.468367084 podStartE2EDuration="1.468367084s" podCreationTimestamp="2024-12-13 01:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:04.468111505 +0000 UTC m=+1.161773711" watchObservedRunningTime="2024-12-13 01:33:04.468367084 +0000 UTC m=+1.162029290" Dec 13 01:33:05.402476 kubelet[2575]: E1213 01:33:05.402353 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:07.354818 kubelet[2575]: E1213 01:33:07.354772 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:07.887287 sudo[1644]: pam_unix(sudo:session): session closed for user root Dec 13 01:33:07.889560 sshd[1641]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:07.895020 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:51848.service: Deactivated successfully. Dec 13 01:33:07.897149 systemd[1]: session-7.scope: Deactivated successfully. 
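The pod_startup_latency_tracker lines above encode simple arithmetic: with both pull timestamps at the zero value (static pods pull nothing), the reported podStartSLOduration comes out as exactly watchObservedRunningTime minus podCreationTimestamp. A quick check of the kube-apiserver numbers, assuming that relationship:

```go
// Reproduces the podStartSLOduration arithmetic from the
// pod_startup_latency_tracker lines above, using the logged timestamps.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// time.Parse accepts a fractional seconds field on input even though
	// the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2024-12-13 01:33:03 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2024-12-13 01:33:04.468225679 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(running.Sub(created)) // 1.468225679s, matching podStartSLOduration
}
```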
Dec 13 01:33:07.897353 systemd[1]: session-7.scope: Consumed 4.495s CPU time, 197.5M memory peak, 0B memory swap peak. Dec 13 01:33:07.897876 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:33:07.898699 systemd-logind[1445]: Removed session 7. Dec 13 01:33:10.219193 kubelet[2575]: E1213 01:33:10.219130 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:10.230373 kubelet[2575]: I1213 01:33:10.230310 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.230292817 podStartE2EDuration="7.230292817s" podCreationTimestamp="2024-12-13 01:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:04.485103922 +0000 UTC m=+1.178766128" watchObservedRunningTime="2024-12-13 01:33:10.230292817 +0000 UTC m=+6.923955023" Dec 13 01:33:10.409481 kubelet[2575]: E1213 01:33:10.409431 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:13.304882 kubelet[2575]: E1213 01:33:13.304823 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:13.412964 kubelet[2575]: E1213 01:33:13.412908 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:17.359243 kubelet[2575]: E1213 01:33:17.359114 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:18.111184 kubelet[2575]: I1213 01:33:18.111113 2575 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:33:18.111462 containerd[1464]: time="2024-12-13T01:33:18.111417022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:33:18.112056 kubelet[2575]: I1213 01:33:18.111591 2575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:33:18.170772 kubelet[2575]: I1213 01:33:18.170712 2575 topology_manager.go:215] "Topology Admit Handler" podUID="7a7c2099-ae78-432f-87ca-26cbfa61413f" podNamespace="kube-system" podName="kube-proxy-dl65r" Dec 13 01:33:18.171864 kubelet[2575]: I1213 01:33:18.171739 2575 topology_manager.go:215] "Topology Admit Handler" podUID="1836fe89-aae6-46a1-867b-f8d97687055c" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-4lzqv" Dec 13 01:33:18.189242 systemd[1]: Created slice kubepods-besteffort-pod7a7c2099_ae78_432f_87ca_26cbfa61413f.slice - libcontainer container kubepods-besteffort-pod7a7c2099_ae78_432f_87ca_26cbfa61413f.slice. Dec 13 01:33:18.204251 systemd[1]: Created slice kubepods-besteffort-pod1836fe89_aae6_46a1_867b_f8d97687055c.slice - libcontainer container kubepods-besteffort-pod1836fe89_aae6_46a1_867b_f8d97687055c.slice. 
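Once the node has synced, kubelet pushes its assigned pod CIDR (192.168.0.0/24 above) to the container runtime; the "No cni config template is specified" message just means containerd is waiting for a CNI plugin to drop its config. A small sketch of the old-vs-new comparison behind "Updating Pod CIDR", assuming the actual update then goes out via CRI's UpdateRuntimeConfig:

```go
// Sketch of the Pod CIDR update check seen above: compare old and new
// CIDRs and push the change to the runtime only when they differ.
package main

import (
	"fmt"
	"net"
)

func podCIDRChanged(oldCIDR, newCIDR string) (bool, error) {
	if newCIDR != "" {
		if _, _, err := net.ParseCIDR(newCIDR); err != nil {
			return false, fmt.Errorf("invalid pod CIDR %q: %w", newCIDR, err)
		}
	}
	return oldCIDR != newCIDR, nil
}

func main() {
	changed, err := podCIDRChanged("", "192.168.0.0/24") // values from the log
	if err != nil {
		panic(err)
	}
	fmt.Println("update runtime config:", changed) // true
}
```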
Dec 13 01:33:18.268368 kubelet[2575]: I1213 01:33:18.268287 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a7c2099-ae78-432f-87ca-26cbfa61413f-kube-proxy\") pod \"kube-proxy-dl65r\" (UID: \"7a7c2099-ae78-432f-87ca-26cbfa61413f\") " pod="kube-system/kube-proxy-dl65r"
Dec 13 01:33:18.268368 kubelet[2575]: I1213 01:33:18.268355 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a7c2099-ae78-432f-87ca-26cbfa61413f-xtables-lock\") pod \"kube-proxy-dl65r\" (UID: \"7a7c2099-ae78-432f-87ca-26cbfa61413f\") " pod="kube-system/kube-proxy-dl65r"
Dec 13 01:33:18.268368 kubelet[2575]: I1213 01:33:18.268380 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kq4\" (UniqueName: \"kubernetes.io/projected/7a7c2099-ae78-432f-87ca-26cbfa61413f-kube-api-access-89kq4\") pod \"kube-proxy-dl65r\" (UID: \"7a7c2099-ae78-432f-87ca-26cbfa61413f\") " pod="kube-system/kube-proxy-dl65r"
Dec 13 01:33:18.268645 kubelet[2575]: I1213 01:33:18.268461 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1836fe89-aae6-46a1-867b-f8d97687055c-var-lib-calico\") pod \"tigera-operator-7bc55997bb-4lzqv\" (UID: \"1836fe89-aae6-46a1-867b-f8d97687055c\") " pod="tigera-operator/tigera-operator-7bc55997bb-4lzqv"
Dec 13 01:33:18.268645 kubelet[2575]: I1213 01:33:18.268499 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a7c2099-ae78-432f-87ca-26cbfa61413f-lib-modules\") pod \"kube-proxy-dl65r\" (UID: \"7a7c2099-ae78-432f-87ca-26cbfa61413f\") " pod="kube-system/kube-proxy-dl65r"
Dec 13 01:33:18.268645 kubelet[2575]: I1213 01:33:18.268518 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9b4v\" (UniqueName: \"kubernetes.io/projected/1836fe89-aae6-46a1-867b-f8d97687055c-kube-api-access-r9b4v\") pod \"tigera-operator-7bc55997bb-4lzqv\" (UID: \"1836fe89-aae6-46a1-867b-f8d97687055c\") " pod="tigera-operator/tigera-operator-7bc55997bb-4lzqv"
Dec 13 01:33:18.502887 kubelet[2575]: E1213 01:33:18.502666 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:18.504583 containerd[1464]: time="2024-12-13T01:33:18.504531525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dl65r,Uid:7a7c2099-ae78-432f-87ca-26cbfa61413f,Namespace:kube-system,Attempt:0,}"
Dec 13 01:33:18.507856 containerd[1464]: time="2024-12-13T01:33:18.507716018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4lzqv,Uid:1836fe89-aae6-46a1-867b-f8d97687055c,Namespace:tigera-operator,Attempt:0,}"
Dec 13 01:33:18.548966 containerd[1464]: time="2024-12-13T01:33:18.548680626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:18.549218 containerd[1464]: time="2024-12-13T01:33:18.549095115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:18.549385 containerd[1464]: time="2024-12-13T01:33:18.549267814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:18.549689 containerd[1464]: time="2024-12-13T01:33:18.549659219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:18.553965 containerd[1464]: time="2024-12-13T01:33:18.552069608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:18.553965 containerd[1464]: time="2024-12-13T01:33:18.552205416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:18.553965 containerd[1464]: time="2024-12-13T01:33:18.552221998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:18.553965 containerd[1464]: time="2024-12-13T01:33:18.552376381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:18.573279 systemd[1]: Started cri-containerd-15a966a918a33c37b9fa295d049dcee3e6f2d5f1bf653b3a90ef3339f2b774dc.scope - libcontainer container 15a966a918a33c37b9fa295d049dcee3e6f2d5f1bf653b3a90ef3339f2b774dc.
Dec 13 01:33:18.578236 systemd[1]: Started cri-containerd-bc57f9b5893cd3f87a8c6b5f92728687106ee1f246b91a81bf4202f9baffddc9.scope - libcontainer container bc57f9b5893cd3f87a8c6b5f92728687106ee1f246b91a81bf4202f9baffddc9.
Dec 13 01:33:18.607488 containerd[1464]: time="2024-12-13T01:33:18.607421246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dl65r,Uid:7a7c2099-ae78-432f-87ca-26cbfa61413f,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a966a918a33c37b9fa295d049dcee3e6f2d5f1bf653b3a90ef3339f2b774dc\""
Dec 13 01:33:18.609446 kubelet[2575]: E1213 01:33:18.609396 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:18.614678 containerd[1464]: time="2024-12-13T01:33:18.614581748Z" level=info msg="CreateContainer within sandbox \"15a966a918a33c37b9fa295d049dcee3e6f2d5f1bf653b3a90ef3339f2b774dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:33:18.635246 containerd[1464]: time="2024-12-13T01:33:18.635187659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4lzqv,Uid:1836fe89-aae6-46a1-867b-f8d97687055c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc57f9b5893cd3f87a8c6b5f92728687106ee1f246b91a81bf4202f9baffddc9\""
Dec 13 01:33:18.642218 containerd[1464]: time="2024-12-13T01:33:18.641942849Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 01:33:18.646876 containerd[1464]: time="2024-12-13T01:33:18.646788784Z" level=info msg="CreateContainer within sandbox \"15a966a918a33c37b9fa295d049dcee3e6f2d5f1bf653b3a90ef3339f2b774dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a622fa6bf62c16455356ed4b7cf9e4acde5ebe92837d1c0a4c489214452c7470\""
Dec 13 01:33:18.647807 containerd[1464]: time="2024-12-13T01:33:18.647763019Z" level=info msg="StartContainer for \"a622fa6bf62c16455356ed4b7cf9e4acde5ebe92837d1c0a4c489214452c7470\""
Dec 13 01:33:18.684100 systemd[1]: Started cri-containerd-a622fa6bf62c16455356ed4b7cf9e4acde5ebe92837d1c0a4c489214452c7470.scope - libcontainer container a622fa6bf62c16455356ed4b7cf9e4acde5ebe92837d1c0a4c489214452c7470.
Dec 13 01:33:18.720178 containerd[1464]: time="2024-12-13T01:33:18.720115194Z" level=info msg="StartContainer for \"a622fa6bf62c16455356ed4b7cf9e4acde5ebe92837d1c0a4c489214452c7470\" returns successfully"
Dec 13 01:33:19.166107 update_engine[1449]: I20241213 01:33:19.165931 1449 update_attempter.cc:509] Updating boot flags...
Dec 13 01:33:19.192877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2910)
Dec 13 01:33:19.228862 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2913)
Dec 13 01:33:19.424131 kubelet[2575]: E1213 01:33:19.424003 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:19.431733 kubelet[2575]: I1213 01:33:19.431662 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dl65r" podStartSLOduration=1.431643778 podStartE2EDuration="1.431643778s" podCreationTimestamp="2024-12-13 01:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:19.431572873 +0000 UTC m=+16.125235089" watchObservedRunningTime="2024-12-13 01:33:19.431643778 +0000 UTC m=+16.125305984"
Dec 13 01:33:21.123568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304945689.mount: Deactivated successfully.
Dec 13 01:33:21.429580 containerd[1464]: time="2024-12-13T01:33:21.429447910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:21.430309 containerd[1464]: time="2024-12-13T01:33:21.430257757Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764325"
Dec 13 01:33:21.431574 containerd[1464]: time="2024-12-13T01:33:21.431521106Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:21.435675 containerd[1464]: time="2024-12-13T01:33:21.435568592Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:21.436612 containerd[1464]: time="2024-12-13T01:33:21.436552780Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.794545358s"
Dec 13 01:33:21.436612 containerd[1464]: time="2024-12-13T01:33:21.436609127Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 01:33:21.439434 containerd[1464]: time="2024-12-13T01:33:21.439379578Z" level=info msg="CreateContainer within sandbox \"bc57f9b5893cd3f87a8c6b5f92728687106ee1f246b91a81bf4202f9baffddc9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 01:33:21.453272 containerd[1464]: time="2024-12-13T01:33:21.453217753Z" level=info msg="CreateContainer within sandbox \"bc57f9b5893cd3f87a8c6b5f92728687106ee1f246b91a81bf4202f9baffddc9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7af65327f858311b3c0e92b5575ff86cfbe0b320e59c11f13969acbe544a79ec\""
Dec 13 01:33:21.454138 containerd[1464]: time="2024-12-13T01:33:21.454049233Z" level=info msg="StartContainer for \"7af65327f858311b3c0e92b5575ff86cfbe0b320e59c11f13969acbe544a79ec\""
Dec 13 01:33:21.488124 systemd[1]: Started cri-containerd-7af65327f858311b3c0e92b5575ff86cfbe0b320e59c11f13969acbe544a79ec.scope - libcontainer container 7af65327f858311b3c0e92b5575ff86cfbe0b320e59c11f13969acbe544a79ec.
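The two containerd messages above pin down the operator image pull: 21,764,325 bytes read in 2.794545358 s. A quick check of the implied transfer rate, using only numbers copied from the log:

    # Values copied verbatim from the containerd entries above.
    bytes_read = 21_764_325       # "bytes read=21764325"
    pull_seconds = 2.794545358    # "in 2.794545358s"
    # Implied throughput of the pull, in binary megabytes per second.
    print(f"{bytes_read / pull_seconds / 2**20:.2f} MiB/s")  # ~7.43 MiB/s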
Dec 13 01:33:21.521255 containerd[1464]: time="2024-12-13T01:33:21.521199608Z" level=info msg="StartContainer for \"7af65327f858311b3c0e92b5575ff86cfbe0b320e59c11f13969acbe544a79ec\" returns successfully"
Dec 13 01:33:23.403519 kubelet[2575]: I1213 01:33:23.403440 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-4lzqv" podStartSLOduration=2.605703815 podStartE2EDuration="5.403421366s" podCreationTimestamp="2024-12-13 01:33:18 +0000 UTC" firstStartedPulling="2024-12-13 01:33:18.640153182 +0000 UTC m=+15.333815388" lastFinishedPulling="2024-12-13 01:33:21.437870733 +0000 UTC m=+18.131532939" observedRunningTime="2024-12-13 01:33:22.441270284 +0000 UTC m=+19.134932510" watchObservedRunningTime="2024-12-13 01:33:23.403421366 +0000 UTC m=+20.097083572"
Dec 13 01:33:24.520071 kubelet[2575]: I1213 01:33:24.519983 2575 topology_manager.go:215] "Topology Admit Handler" podUID="3b11c799-0dd5-4ce9-bc53-89b4f74acc84" podNamespace="calico-system" podName="calico-typha-f54cfcf9f-c5rt2"
Dec 13 01:33:24.542431 systemd[1]: Created slice kubepods-besteffort-pod3b11c799_0dd5_4ce9_bc53_89b4f74acc84.slice - libcontainer container kubepods-besteffort-pod3b11c799_0dd5_4ce9_bc53_89b4f74acc84.slice.
Dec 13 01:33:24.583568 kubelet[2575]: I1213 01:33:24.583502 2575 topology_manager.go:215] "Topology Admit Handler" podUID="ab4967a7-7b97-4238-a561-87014e4a3e44" podNamespace="calico-system" podName="calico-node-cjv7v"
Dec 13 01:33:24.588448 kubelet[2575]: W1213 01:33:24.588401 2575 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Dec 13 01:33:24.588448 kubelet[2575]: E1213 01:33:24.588445 2575 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Dec 13 01:33:24.597705 systemd[1]: Created slice kubepods-besteffort-podab4967a7_7b97_4238_a561_87014e4a3e44.slice - libcontainer container kubepods-besteffort-podab4967a7_7b97_4238_a561_87014e4a3e44.slice.
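The tigera-operator latency entry above is internally consistent: podStartSLOduration is the end-to-end duration minus the image-pull window, which this tracker excludes from the SLO. Verifying with the log's own timestamps (truncated to microseconds, since %f takes at most six digits, so the result agrees with podStartSLOduration=2.605703815 to about a microsecond):

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2024-12-13 01:33:18.640153", FMT)
    last_pull = datetime.strptime("2024-12-13 01:33:21.437870", FMT)
    e2e = 5.403421366  # podStartE2EDuration from the entry above
    pull = (last_pull - first_pull).total_seconds()
    print(e2e - pull)  # ~2.605704, matching podStartSLOduration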
Dec 13 01:33:24.608348 kubelet[2575]: I1213 01:33:24.608303 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-cni-log-dir\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609542 kubelet[2575]: I1213 01:33:24.608580 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-xtables-lock\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609542 kubelet[2575]: I1213 01:33:24.608617 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-flexvol-driver-host\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609542 kubelet[2575]: I1213 01:33:24.608652 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab4967a7-7b97-4238-a561-87014e4a3e44-node-certs\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609542 kubelet[2575]: I1213 01:33:24.608676 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-lib-modules\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609542 kubelet[2575]: I1213 01:33:24.608726 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-policysync\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609731 kubelet[2575]: I1213 01:33:24.608754 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-var-run-calico\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609731 kubelet[2575]: I1213 01:33:24.608793 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrc42\" (UniqueName: \"kubernetes.io/projected/3b11c799-0dd5-4ce9-bc53-89b4f74acc84-kube-api-access-hrc42\") pod \"calico-typha-f54cfcf9f-c5rt2\" (UID: \"3b11c799-0dd5-4ce9-bc53-89b4f74acc84\") " pod="calico-system/calico-typha-f54cfcf9f-c5rt2"
Dec 13 01:33:24.609731 kubelet[2575]: I1213 01:33:24.608818 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4967a7-7b97-4238-a561-87014e4a3e44-tigera-ca-bundle\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609731 kubelet[2575]: I1213 01:33:24.608866 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-var-lib-calico\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.609731 kubelet[2575]: I1213 01:33:24.608881 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-cni-bin-dir\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.610018 kubelet[2575]: I1213 01:33:24.608895 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab4967a7-7b97-4238-a561-87014e4a3e44-cni-net-dir\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.610018 kubelet[2575]: I1213 01:33:24.608920 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zx2\" (UniqueName: \"kubernetes.io/projected/ab4967a7-7b97-4238-a561-87014e4a3e44-kube-api-access-82zx2\") pod \"calico-node-cjv7v\" (UID: \"ab4967a7-7b97-4238-a561-87014e4a3e44\") " pod="calico-system/calico-node-cjv7v"
Dec 13 01:33:24.610018 kubelet[2575]: I1213 01:33:24.608934 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b11c799-0dd5-4ce9-bc53-89b4f74acc84-tigera-ca-bundle\") pod \"calico-typha-f54cfcf9f-c5rt2\" (UID: \"3b11c799-0dd5-4ce9-bc53-89b4f74acc84\") " pod="calico-system/calico-typha-f54cfcf9f-c5rt2"
Dec 13 01:33:24.610018 kubelet[2575]: I1213 01:33:24.608946 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3b11c799-0dd5-4ce9-bc53-89b4f74acc84-typha-certs\") pod \"calico-typha-f54cfcf9f-c5rt2\" (UID: \"3b11c799-0dd5-4ce9-bc53-89b4f74acc84\") " pod="calico-system/calico-typha-f54cfcf9f-c5rt2"
Dec 13 01:33:24.712529 kubelet[2575]: E1213 01:33:24.712485 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:33:24.712736 kubelet[2575]: W1213 01:33:24.712710 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:33:24.712868 kubelet[2575]: E1213 01:33:24.712827 2575 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
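The three-message failure above is kubelet probing for a FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a directory Calico's flexvol driver is expected to populate (hence the flexvol-driver-host mount attached above). The binary does not exist yet, the init call yields empty output, and decoding that output as JSON fails with "unexpected end of JSON input". By the FlexVolume convention a driver answers init with a JSON status object on stdout; a hypothetical stub that would satisfy the probe (the capability values are an assumption for illustration, not taken from this log or from Calico's real driver):

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver stub: kubelet runs "<driver> init"
    # and expects a JSON status object on stdout.
    import json
    import sys

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # "attach": false is typical for node-local drivers (assumption).
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported"}))
            sys.exit(1)

Until the calico-node pod installs the real driver, kubelet's periodic plugin probe keeps re-logging the same triple, which is what fills the rest of this window.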
[The FlexVolume probe triple above (driver-call.go:262, driver-call.go:149, plugins.go:730) recurs near-verbatim several dozen times between 01:33:24.713 and 01:33:24.814 while the calico-system volumes are reconciled; the repeated triples are collapsed here, keeping the distinct entries from that window and the final occurrence.]
Dec 13 01:33:24.730996 kubelet[2575]: I1213 01:33:24.730534 2575 topology_manager.go:215] "Topology Admit Handler" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" podNamespace="calico-system" podName="csi-node-driver-mp76n"
Dec 13 01:33:24.730996 kubelet[2575]: E1213 01:33:24.730866 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7"
Dec 13 01:33:24.810871 kubelet[2575]: I1213 01:33:24.810568 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2aa326b4-c51a-4e10-93c9-213b40c6cdc7-registration-dir\") pod \"csi-node-driver-mp76n\" (UID: \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\") " pod="calico-system/csi-node-driver-mp76n"
Dec 13 01:33:24.811006 kubelet[2575]: I1213 01:33:24.810907 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2aa326b4-c51a-4e10-93c9-213b40c6cdc7-socket-dir\") pod \"csi-node-driver-mp76n\" (UID: \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\") " pod="calico-system/csi-node-driver-mp76n"
Dec 13 01:33:24.811520 kubelet[2575]: I1213 01:33:24.811508 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2aa326b4-c51a-4e10-93c9-213b40c6cdc7-varrun\") pod \"csi-node-driver-mp76n\" (UID: \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\") " pod="calico-system/csi-node-driver-mp76n"
Dec 13 01:33:24.811823 kubelet[2575]: I1213 01:33:24.811792 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2aa326b4-c51a-4e10-93c9-213b40c6cdc7-kubelet-dir\") pod \"csi-node-driver-mp76n\" (UID: \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\") " pod="calico-system/csi-node-driver-mp76n"
Dec 13 01:33:24.812329 kubelet[2575]: I1213 01:33:24.812281 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqn2b\" (UniqueName: \"kubernetes.io/projected/2aa326b4-c51a-4e10-93c9-213b40c6cdc7-kube-api-access-cqn2b\") pod \"csi-node-driver-mp76n\" (UID: \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\") " pod="calico-system/csi-node-driver-mp76n"
Dec 13 01:33:24.814279 kubelet[2575]: E1213 01:33:24.814263 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:33:24.814279 kubelet[2575]: W1213 01:33:24.814275 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:33:24.814341 kubelet[2575]: E1213 01:33:24.814284 2575 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 01:33:24.814517 kubelet[2575]: E1213 01:33:24.814501 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:24.814517 kubelet[2575]: W1213 01:33:24.814512 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:24.814575 kubelet[2575]: E1213 01:33:24.814520 2575 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:24.814753 kubelet[2575]: E1213 01:33:24.814738 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:24.814753 kubelet[2575]: W1213 01:33:24.814749 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:24.814799 kubelet[2575]: E1213 01:33:24.814757 2575 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:24.815001 kubelet[2575]: E1213 01:33:24.814984 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:24.815001 kubelet[2575]: W1213 01:33:24.814996 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:24.815059 kubelet[2575]: E1213 01:33:24.815004 2575 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:24.815226 kubelet[2575]: E1213 01:33:24.815211 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:24.815226 kubelet[2575]: W1213 01:33:24.815222 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:24.815275 kubelet[2575]: E1213 01:33:24.815230 2575 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:24.846088 kubelet[2575]: E1213 01:33:24.846024 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:24.846756 containerd[1464]: time="2024-12-13T01:33:24.846684152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f54cfcf9f-c5rt2,Uid:3b11c799-0dd5-4ce9-bc53-89b4f74acc84,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:24.873299 containerd[1464]: time="2024-12-13T01:33:24.873164805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
Dec 13 01:33:24.874131 containerd[1464]: time="2024-12-13T01:33:24.873899477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:24.874131 containerd[1464]: time="2024-12-13T01:33:24.873921699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:24.874131 containerd[1464]: time="2024-12-13T01:33:24.874018943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:24.898151 systemd[1]: Started cri-containerd-00904e0adcf5e49cb8f8b898279fe219f329a47d00c4df69b15379268e9cbb34.scope - libcontainer container 00904e0adcf5e49cb8f8b898279fe219f329a47d00c4df69b15379268e9cbb34.
Dec 13 01:33:24.940412 containerd[1464]: time="2024-12-13T01:33:24.940324260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f54cfcf9f-c5rt2,Uid:3b11c799-0dd5-4ce9-bc53-89b4f74acc84,Namespace:calico-system,Attempt:0,} returns sandbox id \"00904e0adcf5e49cb8f8b898279fe219f329a47d00c4df69b15379268e9cbb34\""
Dec 13 01:33:24.942128 kubelet[2575]: E1213 01:33:24.942075 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:24.943055 containerd[1464]: time="2024-12-13T01:33:24.943019757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:33:25.804618 kubelet[2575]: E1213 01:33:25.804478 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:25.805326 containerd[1464]: time="2024-12-13T01:33:25.805041332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjv7v,Uid:ab4967a7-7b97-4238-a561-87014e4a3e44,Namespace:calico-system,Attempt:0,}"
Dec 13 01:33:25.828413 containerd[1464]: time="2024-12-13T01:33:25.828217506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:25.828413 containerd[1464]: time="2024-12-13T01:33:25.828280836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:25.828413 containerd[1464]: time="2024-12-13T01:33:25.828294512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:25.828413 containerd[1464]: time="2024-12-13T01:33:25.828380485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:25.861988 systemd[1]: Started cri-containerd-2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb.scope - libcontainer container 2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb.
Dec 13 01:33:25.883796 containerd[1464]: time="2024-12-13T01:33:25.883753601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjv7v,Uid:ab4967a7-7b97-4238-a561-87014e4a3e44,Namespace:calico-system,Attempt:0,} returns sandbox id \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\""
Dec 13 01:33:25.884775 kubelet[2575]: E1213 01:33:25.884433 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:26.391685 kubelet[2575]: E1213 01:33:26.390499 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7"
Dec 13 01:33:26.934915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228312992.mount: Deactivated successfully.
Dec 13 01:33:27.846732 containerd[1464]: time="2024-12-13T01:33:27.846672450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:27.847495 containerd[1464]: time="2024-12-13T01:33:27.847433469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 01:33:27.848588 containerd[1464]: time="2024-12-13T01:33:27.848557364Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:27.850957 containerd[1464]: time="2024-12-13T01:33:27.850925723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:27.851532 containerd[1464]: time="2024-12-13T01:33:27.851483057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.908267098s"
Dec 13 01:33:27.851576 containerd[1464]: time="2024-12-13T01:33:27.851533743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:33:27.858922 containerd[1464]: time="2024-12-13T01:33:27.858889795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:33:27.880344 containerd[1464]: time="2024-12-13T01:33:27.880296371Z" level=info msg="CreateContainer within sandbox \"00904e0adcf5e49cb8f8b898279fe219f329a47d00c4df69b15379268e9cbb34\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:33:27.895845 containerd[1464]: time="2024-12-13T01:33:27.895789794Z" level=info msg="CreateContainer within sandbox \"00904e0adcf5e49cb8f8b898279fe219f329a47d00c4df69b15379268e9cbb34\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c782cca575e0ee76be8a2a5566f45bea462c27a01938a2efb286d82204a532b\""
Dec 13 01:33:27.896616 containerd[1464]: time="2024-12-13T01:33:27.896571001Z" level=info msg="StartContainer for \"1c782cca575e0ee76be8a2a5566f45bea462c27a01938a2efb286d82204a532b\""
Dec 13 01:33:27.922170 systemd[1]: run-containerd-runc-k8s.io-1c782cca575e0ee76be8a2a5566f45bea462c27a01938a2efb286d82204a532b-runc.qGLjxW.mount: Deactivated successfully.
Dec 13 01:33:27.934021 systemd[1]: Started cri-containerd-1c782cca575e0ee76be8a2a5566f45bea462c27a01938a2efb286d82204a532b.scope - libcontainer container 1c782cca575e0ee76be8a2a5566f45bea462c27a01938a2efb286d82204a532b.
Dec 13 01:33:28.048000 containerd[1464]: time="2024-12-13T01:33:28.047944412Z" level=info msg="StartContainer for \"1c782cca575e0ee76be8a2a5566f45bea462c27a01938a2efb286d82204a532b\" returns successfully"
Dec 13 01:33:28.389942 kubelet[2575]: E1213 01:33:28.389862 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7"
Dec 13 01:33:28.447217 kubelet[2575]: E1213 01:33:28.447172 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:29.104167 containerd[1464]: time="2024-12-13T01:33:29.104096165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:29.104920 containerd[1464]: time="2024-12-13T01:33:29.104880727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 13 01:33:29.106037 containerd[1464]: time="2024-12-13T01:33:29.106008307Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:29.108259 containerd[1464]: time="2024-12-13T01:33:29.108204287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:29.108746 containerd[1464]: time="2024-12-13T01:33:29.108702508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.249778016s"
Dec 13 01:33:29.108746 containerd[1464]: time="2024-12-13T01:33:29.108740920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 01:33:29.111203 containerd[1464]: time="2024-12-13T01:33:29.111159590Z" level=info msg="CreateContainer within sandbox \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:33:29.127635 containerd[1464]: time="2024-12-13T01:33:29.127582919Z" level=info msg="CreateContainer within sandbox \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf\""
Dec 13 01:33:29.128223 containerd[1464]: time="2024-12-13T01:33:29.128183514Z" level=info msg="StartContainer for \"e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf\""
Dec 13 01:33:29.171075 systemd[1]: Started cri-containerd-e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf.scope - libcontainer container e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf.
Dec 13 01:33:29.231567 systemd[1]: cri-containerd-e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf.scope: Deactivated successfully. Dec 13 01:33:29.283767 containerd[1464]: time="2024-12-13T01:33:29.283684068Z" level=info msg="StartContainer for \"e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf\" returns successfully" Dec 13 01:33:29.307315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf-rootfs.mount: Deactivated successfully. Dec 13 01:33:29.452350 kubelet[2575]: I1213 01:33:29.452309 2575 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:29.452939 kubelet[2575]: E1213 01:33:29.452625 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:29.453146 kubelet[2575]: E1213 01:33:29.453112 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:29.467310 kubelet[2575]: I1213 01:33:29.466483 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f54cfcf9f-c5rt2" podStartSLOduration=2.553759339 podStartE2EDuration="5.466463301s" podCreationTimestamp="2024-12-13 01:33:24 +0000 UTC" firstStartedPulling="2024-12-13 01:33:24.942632754 +0000 UTC m=+21.636294960" lastFinishedPulling="2024-12-13 01:33:27.855336716 +0000 UTC m=+24.548998922" observedRunningTime="2024-12-13 01:33:28.457921922 +0000 UTC m=+25.151584128" watchObservedRunningTime="2024-12-13 01:33:29.466463301 +0000 UTC m=+26.160125507" Dec 13 01:33:29.649023 containerd[1464]: time="2024-12-13T01:33:29.648958617Z" level=info msg="shim disconnected" id=e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf namespace=k8s.io Dec 13 01:33:29.649023 containerd[1464]: time="2024-12-13T01:33:29.649008501Z" level=warning msg="cleaning up after shim disconnected" id=e7659e02e267686d24d7c6a9850700cd442c5f57160e5cbe1269adb0af3d48bf namespace=k8s.io Dec 13 01:33:29.649023 containerd[1464]: time="2024-12-13T01:33:29.649017769Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:30.388304 kubelet[2575]: E1213 01:33:30.388233 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" Dec 13 01:33:30.455352 kubelet[2575]: E1213 01:33:30.455312 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:30.456352 containerd[1464]: time="2024-12-13T01:33:30.456287211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:33:32.388592 kubelet[2575]: E1213 01:33:32.388535 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" Dec 13 01:33:34.388462 kubelet[2575]: E1213 01:33:34.388362 2575 
pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" Dec 13 01:33:35.568199 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:43510.service - OpenSSH per-connection server daemon (10.0.0.1:43510). Dec 13 01:33:35.576348 containerd[1464]: time="2024-12-13T01:33:35.575279127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:35.576961 containerd[1464]: time="2024-12-13T01:33:35.576794012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:33:35.578799 containerd[1464]: time="2024-12-13T01:33:35.578751913Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:35.582047 containerd[1464]: time="2024-12-13T01:33:35.582010605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:35.583041 containerd[1464]: time="2024-12-13T01:33:35.583002744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.126642726s" Dec 13 01:33:35.583093 containerd[1464]: time="2024-12-13T01:33:35.583040847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:33:35.587139 containerd[1464]: time="2024-12-13T01:33:35.587099226Z" level=info msg="CreateContainer within sandbox \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:33:35.603528 containerd[1464]: time="2024-12-13T01:33:35.603472677Z" level=info msg="CreateContainer within sandbox \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0\"" Dec 13 01:33:35.603629 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 43510 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:33:35.604436 containerd[1464]: time="2024-12-13T01:33:35.604408061Z" level=info msg="StartContainer for \"e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0\"" Dec 13 01:33:35.605482 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:35.609890 systemd-logind[1445]: New session 8 of user core. Dec 13 01:33:35.615977 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:33:35.642989 systemd[1]: Started cri-containerd-e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0.scope - libcontainer container e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0. 
Dec 13 01:33:35.673172 containerd[1464]: time="2024-12-13T01:33:35.673107409Z" level=info msg="StartContainer for \"e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0\" returns successfully" Dec 13 01:33:35.747397 sshd[3344]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:35.752809 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:43510.service: Deactivated successfully. Dec 13 01:33:35.755244 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:33:35.756050 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:33:35.757179 systemd-logind[1445]: Removed session 8. Dec 13 01:33:36.387576 kubelet[2575]: E1213 01:33:36.387518 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" Dec 13 01:33:36.470219 kubelet[2575]: E1213 01:33:36.469283 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:37.278709 containerd[1464]: time="2024-12-13T01:33:37.278635701Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:33:37.281869 systemd[1]: cri-containerd-e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0.scope: Deactivated successfully. Dec 13 01:33:37.307537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0-rootfs.mount: Deactivated successfully. Dec 13 01:33:37.321564 kubelet[2575]: I1213 01:33:37.321527 2575 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:33:37.344995 kubelet[2575]: I1213 01:33:37.344923 2575 topology_manager.go:215] "Topology Admit Handler" podUID="892b10bf-3a8d-4c3d-8649-291377d9695e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fmn8h" Dec 13 01:33:37.349940 kubelet[2575]: I1213 01:33:37.349903 2575 topology_manager.go:215] "Topology Admit Handler" podUID="04d42515-4a5b-418f-b47e-c07fd5f34d8b" podNamespace="calico-system" podName="calico-kube-controllers-5b567484f5-slbhn" Dec 13 01:33:37.350060 kubelet[2575]: I1213 01:33:37.350035 2575 topology_manager.go:215] "Topology Admit Handler" podUID="ede8f753-82f0-4f13-acfc-752baf14716b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-68jq8" Dec 13 01:33:37.352070 kubelet[2575]: I1213 01:33:37.352042 2575 topology_manager.go:215] "Topology Admit Handler" podUID="6f95fdeb-1056-4a6f-ba9e-df8029b239a1" podNamespace="calico-apiserver" podName="calico-apiserver-7759d578c8-pbvdn" Dec 13 01:33:37.353506 kubelet[2575]: I1213 01:33:37.353476 2575 topology_manager.go:215] "Topology Admit Handler" podUID="3a651f28-b895-466b-a4fa-253090684670" podNamespace="calico-apiserver" podName="calico-apiserver-7759d578c8-fj2gm" Dec 13 01:33:37.358431 systemd[1]: Created slice kubepods-burstable-pod892b10bf_3a8d_4c3d_8649_291377d9695e.slice - libcontainer container kubepods-burstable-pod892b10bf_3a8d_4c3d_8649_291377d9695e.slice. 
Dec 13 01:33:37.365104 systemd[1]: Created slice kubepods-burstable-podede8f753_82f0_4f13_acfc_752baf14716b.slice - libcontainer container kubepods-burstable-podede8f753_82f0_4f13_acfc_752baf14716b.slice. Dec 13 01:33:37.369989 systemd[1]: Created slice kubepods-besteffort-pod04d42515_4a5b_418f_b47e_c07fd5f34d8b.slice - libcontainer container kubepods-besteffort-pod04d42515_4a5b_418f_b47e_c07fd5f34d8b.slice. Dec 13 01:33:37.375198 systemd[1]: Created slice kubepods-besteffort-pod6f95fdeb_1056_4a6f_ba9e_df8029b239a1.slice - libcontainer container kubepods-besteffort-pod6f95fdeb_1056_4a6f_ba9e_df8029b239a1.slice. Dec 13 01:33:37.380174 systemd[1]: Created slice kubepods-besteffort-pod3a651f28_b895_466b_a4fa_253090684670.slice - libcontainer container kubepods-besteffort-pod3a651f28_b895_466b_a4fa_253090684670.slice. Dec 13 01:33:37.447259 kubelet[2575]: I1213 01:33:37.447030 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wzv\" (UniqueName: \"kubernetes.io/projected/04d42515-4a5b-418f-b47e-c07fd5f34d8b-kube-api-access-75wzv\") pod \"calico-kube-controllers-5b567484f5-slbhn\" (UID: \"04d42515-4a5b-418f-b47e-c07fd5f34d8b\") " pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" Dec 13 01:33:37.447259 kubelet[2575]: I1213 01:33:37.447089 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78tg\" (UniqueName: \"kubernetes.io/projected/892b10bf-3a8d-4c3d-8649-291377d9695e-kube-api-access-v78tg\") pod \"coredns-7db6d8ff4d-fmn8h\" (UID: \"892b10bf-3a8d-4c3d-8649-291377d9695e\") " pod="kube-system/coredns-7db6d8ff4d-fmn8h" Dec 13 01:33:37.447259 kubelet[2575]: I1213 01:33:37.447111 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ede8f753-82f0-4f13-acfc-752baf14716b-config-volume\") pod \"coredns-7db6d8ff4d-68jq8\" (UID: \"ede8f753-82f0-4f13-acfc-752baf14716b\") " pod="kube-system/coredns-7db6d8ff4d-68jq8" Dec 13 01:33:37.447259 kubelet[2575]: I1213 01:33:37.447131 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d42515-4a5b-418f-b47e-c07fd5f34d8b-tigera-ca-bundle\") pod \"calico-kube-controllers-5b567484f5-slbhn\" (UID: \"04d42515-4a5b-418f-b47e-c07fd5f34d8b\") " pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" Dec 13 01:33:37.447259 kubelet[2575]: I1213 01:33:37.447148 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892b10bf-3a8d-4c3d-8649-291377d9695e-config-volume\") pod \"coredns-7db6d8ff4d-fmn8h\" (UID: \"892b10bf-3a8d-4c3d-8649-291377d9695e\") " pod="kube-system/coredns-7db6d8ff4d-fmn8h" 
Dec 13 01:33:37.447986 kubelet[2575]: I1213 01:33:37.447163 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkl9l\" (UniqueName: \"kubernetes.io/projected/ede8f753-82f0-4f13-acfc-752baf14716b-kube-api-access-dkl9l\") pod \"coredns-7db6d8ff4d-68jq8\" (UID: \"ede8f753-82f0-4f13-acfc-752baf14716b\") " pod="kube-system/coredns-7db6d8ff4d-68jq8" Dec 13 01:33:37.447986 kubelet[2575]: I1213 01:33:37.447181 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a651f28-b895-466b-a4fa-253090684670-calico-apiserver-certs\") pod \"calico-apiserver-7759d578c8-fj2gm\" (UID: \"3a651f28-b895-466b-a4fa-253090684670\") " pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" Dec 13 01:33:37.447986 kubelet[2575]: I1213 01:33:37.447197 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjnp\" (UniqueName: \"kubernetes.io/projected/3a651f28-b895-466b-a4fa-253090684670-kube-api-access-dxjnp\") pod \"calico-apiserver-7759d578c8-fj2gm\" (UID: \"3a651f28-b895-466b-a4fa-253090684670\") " pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" Dec 13 01:33:37.447986 kubelet[2575]: I1213 01:33:37.447211 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm4bj\" (UniqueName: \"kubernetes.io/projected/6f95fdeb-1056-4a6f-ba9e-df8029b239a1-kube-api-access-hm4bj\") pod \"calico-apiserver-7759d578c8-pbvdn\" (UID: \"6f95fdeb-1056-4a6f-ba9e-df8029b239a1\") " pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" Dec 13 01:33:37.447986 kubelet[2575]: I1213 01:33:37.447229 2575 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f95fdeb-1056-4a6f-ba9e-df8029b239a1-calico-apiserver-certs\") pod \"calico-apiserver-7759d578c8-pbvdn\" (UID: \"6f95fdeb-1056-4a6f-ba9e-df8029b239a1\") " pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" Dec 13 01:33:37.471116 kubelet[2575]: E1213 01:33:37.471077 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:37.565463 containerd[1464]: time="2024-12-13T01:33:37.565268773Z" level=info msg="shim disconnected" id=e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0 namespace=k8s.io Dec 13 01:33:37.565463 containerd[1464]: time="2024-12-13T01:33:37.565381715Z" level=warning msg="cleaning up after shim disconnected" id=e84d525c5485f8b26fc0a8f8cbf0b5786474f9953b27157e385155fd20b07ff0 namespace=k8s.io Dec 13 01:33:37.566370 containerd[1464]: time="2024-12-13T01:33:37.565393568Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:37.663098 kubelet[2575]: E1213 01:33:37.663053 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:37.663863 containerd[1464]: time="2024-12-13T01:33:37.663797682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmn8h,Uid:892b10bf-3a8d-4c3d-8649-291377d9695e,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:37.668449 kubelet[2575]: E1213 01:33:37.668405 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:37.668900 containerd[1464]: time="2024-12-13T01:33:37.668858916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68jq8,Uid:ede8f753-82f0-4f13-acfc-752baf14716b,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:37.673077 containerd[1464]: time="2024-12-13T01:33:37.673027498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b567484f5-slbhn,Uid:04d42515-4a5b-418f-b47e-c07fd5f34d8b,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:37.678816 containerd[1464]: time="2024-12-13T01:33:37.678764755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-pbvdn,Uid:6f95fdeb-1056-4a6f-ba9e-df8029b239a1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:37.683457 containerd[1464]: time="2024-12-13T01:33:37.683410236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-fj2gm,Uid:3a651f28-b895-466b-a4fa-253090684670,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:37.788008 containerd[1464]: time="2024-12-13T01:33:37.787822127Z" level=error msg="Failed to destroy network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.788434 containerd[1464]: time="2024-12-13T01:33:37.788397581Z" level=error msg="Failed to destroy network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.789176 containerd[1464]: time="2024-12-13T01:33:37.789148676Z" level=error msg="encountered an error cleaning up failed sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.789293 containerd[1464]: time="2024-12-13T01:33:37.789271407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b567484f5-slbhn,Uid:04d42515-4a5b-418f-b47e-c07fd5f34d8b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.789497 containerd[1464]: time="2024-12-13T01:33:37.789382526Z" level=error msg="encountered an error cleaning up failed sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.789745 kubelet[2575]: E1213 01:33:37.789709 2575 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.790403 kubelet[2575]: E1213 01:33:37.789958 2575 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" Dec 13 01:33:37.790403 kubelet[2575]: E1213 01:33:37.789989 2575 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" Dec 13 01:33:37.790403 kubelet[2575]: E1213 01:33:37.790044 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b567484f5-slbhn_calico-system(04d42515-4a5b-418f-b47e-c07fd5f34d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b567484f5-slbhn_calico-system(04d42515-4a5b-418f-b47e-c07fd5f34d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" podUID="04d42515-4a5b-418f-b47e-c07fd5f34d8b" Dec 13 01:33:37.790527 containerd[1464]: time="2024-12-13T01:33:37.790327517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68jq8,Uid:ede8f753-82f0-4f13-acfc-752baf14716b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.790768 kubelet[2575]: E1213 01:33:37.790722 2575 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.790947 kubelet[2575]: E1213 01:33:37.790898 2575 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-68jq8" Dec 13 01:33:37.790994 kubelet[2575]: E1213 01:33:37.790951 2575 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-68jq8" Dec 13 01:33:37.791023 kubelet[2575]: E1213 01:33:37.790995 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-68jq8_kube-system(ede8f753-82f0-4f13-acfc-752baf14716b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-68jq8_kube-system(ede8f753-82f0-4f13-acfc-752baf14716b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-68jq8" podUID="ede8f753-82f0-4f13-acfc-752baf14716b" Dec 13 01:33:37.804335 containerd[1464]: time="2024-12-13T01:33:37.804272835Z" level=error msg="Failed to destroy network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.805354 containerd[1464]: time="2024-12-13T01:33:37.805299439Z" level=error msg="encountered an error cleaning up failed sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.805656 containerd[1464]: time="2024-12-13T01:33:37.805401261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmn8h,Uid:892b10bf-3a8d-4c3d-8649-291377d9695e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.805913 kubelet[2575]: E1213 01:33:37.805816 2575 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.806008 kubelet[2575]: E1213 01:33:37.805928 2575 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fmn8h" Dec 13 01:33:37.806008 kubelet[2575]: E1213 01:33:37.805950 2575 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fmn8h" Dec 13 01:33:37.806078 kubelet[2575]: E1213 01:33:37.806003 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fmn8h_kube-system(892b10bf-3a8d-4c3d-8649-291377d9695e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fmn8h_kube-system(892b10bf-3a8d-4c3d-8649-291377d9695e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fmn8h" podUID="892b10bf-3a8d-4c3d-8649-291377d9695e" Dec 13 01:33:37.810744 containerd[1464]: time="2024-12-13T01:33:37.810705071Z" level=error msg="Failed to destroy network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.811078 containerd[1464]: time="2024-12-13T01:33:37.811043649Z" level=error msg="encountered an error cleaning up failed sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.811121 containerd[1464]: time="2024-12-13T01:33:37.811086660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-fj2gm,Uid:3a651f28-b895-466b-a4fa-253090684670,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.811293 kubelet[2575]: E1213 01:33:37.811227 2575 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.811293 kubelet[2575]: E1213 01:33:37.811292 2575 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" Dec 13 01:33:37.811478 kubelet[2575]: E1213 01:33:37.811307 2575 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" Dec 13 01:33:37.811478 kubelet[2575]: E1213 01:33:37.811358 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7759d578c8-fj2gm_calico-apiserver(3a651f28-b895-466b-a4fa-253090684670)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7759d578c8-fj2gm_calico-apiserver(3a651f28-b895-466b-a4fa-253090684670)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" podUID="3a651f28-b895-466b-a4fa-253090684670" Dec 13 01:33:37.817536 containerd[1464]: time="2024-12-13T01:33:37.817428186Z" level=error msg="Failed to destroy network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.817955 containerd[1464]: time="2024-12-13T01:33:37.817917718Z" level=error msg="encountered an error cleaning up failed sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.818000 containerd[1464]: time="2024-12-13T01:33:37.817969005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-pbvdn,Uid:6f95fdeb-1056-4a6f-ba9e-df8029b239a1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.818277 kubelet[2575]: E1213 01:33:37.818236 2575 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:37.818359 kubelet[2575]: E1213 01:33:37.818304 2575 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" Dec 13 01:33:37.818359 kubelet[2575]: E1213 01:33:37.818338 2575 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" Dec 13 01:33:37.818438 kubelet[2575]: E1213 01:33:37.818405 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7759d578c8-pbvdn_calico-apiserver(6f95fdeb-1056-4a6f-ba9e-df8029b239a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7759d578c8-pbvdn_calico-apiserver(6f95fdeb-1056-4a6f-ba9e-df8029b239a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" podUID="6f95fdeb-1056-4a6f-ba9e-df8029b239a1" Dec 13 01:33:38.394783 systemd[1]: Created slice kubepods-besteffort-pod2aa326b4_c51a_4e10_93c9_213b40c6cdc7.slice - libcontainer container kubepods-besteffort-pod2aa326b4_c51a_4e10_93c9_213b40c6cdc7.slice. 
Dec 13 01:33:38.397414 containerd[1464]: time="2024-12-13T01:33:38.397374526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mp76n,Uid:2aa326b4-c51a-4e10-93c9-213b40c6cdc7,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:38.466274 containerd[1464]: time="2024-12-13T01:33:38.466207261Z" level=error msg="Failed to destroy network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.466713 containerd[1464]: time="2024-12-13T01:33:38.466678508Z" level=error msg="encountered an error cleaning up failed sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.466768 containerd[1464]: time="2024-12-13T01:33:38.466744092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mp76n,Uid:2aa326b4-c51a-4e10-93c9-213b40c6cdc7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.467122 kubelet[2575]: E1213 01:33:38.467049 2575 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.467122 kubelet[2575]: E1213 01:33:38.467126 2575 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mp76n" Dec 13 01:33:38.467588 kubelet[2575]: E1213 01:33:38.467148 2575 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mp76n" Dec 13 01:33:38.467588 kubelet[2575]: E1213 01:33:38.467198 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mp76n_calico-system(2aa326b4-c51a-4e10-93c9-213b40c6cdc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mp76n_calico-system(2aa326b4-c51a-4e10-93c9-213b40c6cdc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" Dec 13 01:33:38.468768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba-shm.mount: Deactivated successfully. Dec 13 01:33:38.473972 kubelet[2575]: I1213 01:33:38.473938 2575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:33:38.474537 containerd[1464]: time="2024-12-13T01:33:38.474498525Z" level=info msg="StopPodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\"" Dec 13 01:33:38.474778 containerd[1464]: time="2024-12-13T01:33:38.474750119Z" level=info msg="Ensure that sandbox ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c in task-service has been cleanup successfully" Dec 13 01:33:38.476081 kubelet[2575]: I1213 01:33:38.476039 2575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Dec 13 01:33:38.476626 containerd[1464]: time="2024-12-13T01:33:38.476570637Z" level=info msg="StopPodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\"" Dec 13 01:33:38.476793 containerd[1464]: time="2024-12-13T01:33:38.476750456Z" level=info msg="Ensure that sandbox 1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde in task-service has been cleanup successfully" Dec 13 01:33:38.481366 kubelet[2575]: E1213 01:33:38.481043 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:38.482341 containerd[1464]: time="2024-12-13T01:33:38.482287383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:33:38.484319 kubelet[2575]: I1213 01:33:38.484277 2575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:33:38.487025 containerd[1464]: time="2024-12-13T01:33:38.486982034Z" level=info msg="StopPodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\"" Dec 13 01:33:38.487204 containerd[1464]: time="2024-12-13T01:33:38.487171821Z" level=info msg="Ensure that sandbox 95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15 in task-service has been cleanup successfully" Dec 13 01:33:38.493525 kubelet[2575]: I1213 01:33:38.492388 2575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Dec 13 01:33:38.495231 containerd[1464]: time="2024-12-13T01:33:38.495188648Z" level=info msg="StopPodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\"" Dec 13 01:33:38.495467 containerd[1464]: time="2024-12-13T01:33:38.495364449Z" level=info msg="Ensure that sandbox 5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf in task-service has been cleanup successfully" Dec 13 01:33:38.499325 kubelet[2575]: I1213 01:33:38.499283 2575 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:33:38.500355 containerd[1464]: time="2024-12-13T01:33:38.499943091Z" level=info msg="StopPodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\"" Dec 13 01:33:38.500355 containerd[1464]: time="2024-12-13T01:33:38.500117791Z" level=info msg="Ensure that sandbox 2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba in task-service has been cleanup successfully" Dec 13 01:33:38.505984 kubelet[2575]: I1213 01:33:38.505951 2575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Dec 13 01:33:38.506904 containerd[1464]: time="2024-12-13T01:33:38.506864976Z" level=info msg="StopPodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\"" Dec 13 01:33:38.508929 containerd[1464]: time="2024-12-13T01:33:38.508863460Z" level=info msg="Ensure that sandbox bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590 in task-service has been cleanup successfully" Dec 13 01:33:38.521960 containerd[1464]: time="2024-12-13T01:33:38.521502732Z" level=error msg="StopPodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" failed" error="failed to destroy network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.523118 kubelet[2575]: E1213 01:33:38.523073 2575 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:33:38.523200 kubelet[2575]: E1213 01:33:38.523142 2575 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c"} Dec 13 01:33:38.523226 kubelet[2575]: E1213 01:33:38.523216 2575 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ede8f753-82f0-4f13-acfc-752baf14716b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:38.523296 kubelet[2575]: E1213 01:33:38.523240 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ede8f753-82f0-4f13-acfc-752baf14716b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-68jq8" 
podUID="ede8f753-82f0-4f13-acfc-752baf14716b" Dec 13 01:33:38.535321 containerd[1464]: time="2024-12-13T01:33:38.535269938Z" level=error msg="StopPodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" failed" error="failed to destroy network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.535798 kubelet[2575]: E1213 01:33:38.535667 2575 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:33:38.535798 kubelet[2575]: E1213 01:33:38.535716 2575 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15"} Dec 13 01:33:38.535798 kubelet[2575]: E1213 01:33:38.535749 2575 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a651f28-b895-466b-a4fa-253090684670\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:38.535798 kubelet[2575]: E1213 01:33:38.535772 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a651f28-b895-466b-a4fa-253090684670\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" podUID="3a651f28-b895-466b-a4fa-253090684670" Dec 13 01:33:38.546245 containerd[1464]: time="2024-12-13T01:33:38.546177770Z" level=error msg="StopPodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" failed" error="failed to destroy network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.546501 kubelet[2575]: E1213 01:33:38.546454 2575 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Dec 13 
01:33:38.546557 kubelet[2575]: E1213 01:33:38.546510 2575 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"} Dec 13 01:33:38.546557 kubelet[2575]: E1213 01:33:38.546547 2575 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04d42515-4a5b-418f-b47e-c07fd5f34d8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:38.546647 kubelet[2575]: E1213 01:33:38.546573 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04d42515-4a5b-418f-b47e-c07fd5f34d8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" podUID="04d42515-4a5b-418f-b47e-c07fd5f34d8b" Dec 13 01:33:38.551175 containerd[1464]: time="2024-12-13T01:33:38.551108265Z" level=error msg="StopPodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" failed" error="failed to destroy network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.551689 kubelet[2575]: E1213 01:33:38.551632 2575 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Dec 13 01:33:38.551752 kubelet[2575]: E1213 01:33:38.551692 2575 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"} Dec 13 01:33:38.551752 kubelet[2575]: E1213 01:33:38.551735 2575 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"892b10bf-3a8d-4c3d-8649-291377d9695e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:38.551857 kubelet[2575]: E1213 01:33:38.551759 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"892b10bf-3a8d-4c3d-8649-291377d9695e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fmn8h" podUID="892b10bf-3a8d-4c3d-8649-291377d9695e" Dec 13 01:33:38.556505 containerd[1464]: time="2024-12-13T01:33:38.556414808Z" level=error msg="StopPodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" failed" error="failed to destroy network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.556665 kubelet[2575]: E1213 01:33:38.556620 2575 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:33:38.556707 kubelet[2575]: E1213 01:33:38.556659 2575 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba"} Dec 13 01:33:38.556707 kubelet[2575]: E1213 01:33:38.556689 2575 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:38.556793 kubelet[2575]: E1213 01:33:38.556713 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2aa326b4-c51a-4e10-93c9-213b40c6cdc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mp76n" podUID="2aa326b4-c51a-4e10-93c9-213b40c6cdc7" Dec 13 01:33:38.558169 containerd[1464]: time="2024-12-13T01:33:38.558135909Z" level=error msg="StopPodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" failed" error="failed to destroy network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:38.558432 kubelet[2575]: E1213 01:33:38.558368 2575 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Dec 13 01:33:38.558484 kubelet[2575]: E1213 01:33:38.558448 2575 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"} Dec 13 01:33:38.558527 kubelet[2575]: E1213 01:33:38.558495 2575 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f95fdeb-1056-4a6f-ba9e-df8029b239a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:38.558581 kubelet[2575]: E1213 01:33:38.558542 2575 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f95fdeb-1056-4a6f-ba9e-df8029b239a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" podUID="6f95fdeb-1056-4a6f-ba9e-df8029b239a1" Dec 13 01:33:40.760476 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:44922.service - OpenSSH per-connection server daemon (10.0.0.1:44922). Dec 13 01:33:41.052887 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 44922 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:33:41.055029 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:41.060826 systemd-logind[1445]: New session 9 of user core. Dec 13 01:33:41.067134 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:33:41.201030 sshd[3788]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:41.208665 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:44922.service: Deactivated successfully. Dec 13 01:33:41.212083 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:33:41.213763 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:33:41.215505 systemd-logind[1445]: Removed session 9. Dec 13 01:33:43.351999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047784398.mount: Deactivated successfully. 
Dec 13 01:33:44.026215 containerd[1464]: time="2024-12-13T01:33:44.026114370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:44.029695 containerd[1464]: time="2024-12-13T01:33:44.029645982Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:44.033605 containerd[1464]: time="2024-12-13T01:33:44.033468000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:44.033944 containerd[1464]: time="2024-12-13T01:33:44.033906915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:33:44.034324 containerd[1464]: time="2024-12-13T01:33:44.034287832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.551957849s" Dec 13 01:33:44.034324 containerd[1464]: time="2024-12-13T01:33:44.034321124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:33:44.049477 containerd[1464]: time="2024-12-13T01:33:44.049420953Z" level=info msg="CreateContainer within sandbox \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:33:44.075248 containerd[1464]: time="2024-12-13T01:33:44.075180558Z" level=info msg="CreateContainer within sandbox \"2702894749d030f8cd11d86b00223176f1a197c9959c9e1cb7982c9930765fcb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"93b1be3781ab69ac9010da4902d16221cf88662763081c6fa685070b18ec7719\"" Dec 13 01:33:44.075815 containerd[1464]: time="2024-12-13T01:33:44.075788561Z" level=info msg="StartContainer for \"93b1be3781ab69ac9010da4902d16221cf88662763081c6fa685070b18ec7719\"" Dec 13 01:33:44.155180 systemd[1]: Started cri-containerd-93b1be3781ab69ac9010da4902d16221cf88662763081c6fa685070b18ec7719.scope - libcontainer container 93b1be3781ab69ac9010da4902d16221cf88662763081c6fa685070b18ec7719. Dec 13 01:33:44.193898 containerd[1464]: time="2024-12-13T01:33:44.193811914Z" level=info msg="StartContainer for \"93b1be3781ab69ac9010da4902d16221cf88662763081c6fa685070b18ec7719\" returns successfully" Dec 13 01:33:44.293789 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:33:44.293988 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
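The pull records above pair a human-readable repo tag with a content digest and report 142742010 bytes read in 5.551957849s; containerd addresses and verifies image content by the digest, so the tag is only an alias for ghcr.io/flatcar/calico/node@sha256:99c39175.... A hedged sketch of the "sha256:<hex>" digest scheme itself (illustrative only, not containerd's internal verification path):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// verifyDigest checks content-addressed bytes against an expected
// "sha256:<hex>" digest string, the scheme used in the pull log above.
func verifyDigest(blob []byte, expected string) bool {
	sum := sha256.Sum256(blob)
	return fmt.Sprintf("sha256:%x", sum) == expected
}

func main() {
	blob := []byte("example layer bytes") // placeholder, not real image data
	want := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
	fmt.Println(verifyDigest(blob, want)) // true
}
```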
Dec 13 01:33:44.522644 kubelet[2575]: E1213 01:33:44.522594 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:44.553207 kubelet[2575]: I1213 01:33:44.552810 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cjv7v" podStartSLOduration=2.403189764 podStartE2EDuration="20.55279331s" podCreationTimestamp="2024-12-13 01:33:24 +0000 UTC" firstStartedPulling="2024-12-13 01:33:25.885648639 +0000 UTC m=+22.579310845" lastFinishedPulling="2024-12-13 01:33:44.035252175 +0000 UTC m=+40.728914391" observedRunningTime="2024-12-13 01:33:44.55062449 +0000 UTC m=+41.244286706" watchObservedRunningTime="2024-12-13 01:33:44.55279331 +0000 UTC m=+41.246455516" Dec 13 01:33:46.219060 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:50670.service - OpenSSH per-connection server daemon (10.0.0.1:50670). Dec 13 01:33:46.279692 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 50670 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:33:46.282233 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:46.287661 systemd-logind[1445]: New session 10 of user core. Dec 13 01:33:46.301173 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:33:46.503998 sshd[3980]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:46.509581 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:50670.service: Deactivated successfully. Dec 13 01:33:46.512461 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:33:46.513534 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:33:46.514909 systemd-logind[1445]: Removed session 10. Dec 13 01:33:49.390008 containerd[1464]: time="2024-12-13T01:33:49.389580820Z" level=info msg="StopPodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\"" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.491 [INFO][4086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.497 [INFO][4086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" iface="eth0" netns="/var/run/netns/cni-3cb5f45d-b140-c765-f2fb-7a55ad20228d" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.497 [INFO][4086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" iface="eth0" netns="/var/run/netns/cni-3cb5f45d-b140-c765-f2fb-7a55ad20228d" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.498 [INFO][4086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" iface="eth0" netns="/var/run/netns/cni-3cb5f45d-b140-c765-f2fb-7a55ad20228d" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.498 [INFO][4086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.498 [INFO][4086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.554 [INFO][4096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.554 [INFO][4096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.555 [INFO][4096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.561 [WARNING][4096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.561 [INFO][4096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.562 [INFO][4096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:49.568901 containerd[1464]: 2024-12-13 01:33:49.565 [INFO][4086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Dec 13 01:33:49.569553 containerd[1464]: time="2024-12-13T01:33:49.569127415Z" level=info msg="TearDown network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" successfully" Dec 13 01:33:49.569553 containerd[1464]: time="2024-12-13T01:33:49.569160447Z" level=info msg="StopPodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" returns successfully" Dec 13 01:33:49.570101 containerd[1464]: time="2024-12-13T01:33:49.570009161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b567484f5-slbhn,Uid:04d42515-4a5b-418f-b47e-c07fd5f34d8b,Namespace:calico-system,Attempt:1,}" Dec 13 01:33:49.572634 systemd[1]: run-netns-cni\x2d3cb5f45d\x2db140\x2dc765\x2df2fb\x2d7a55ad20228d.mount: Deactivated successfully. 
Dec 13 01:33:49.964956 systemd-networkd[1390]: cali53e602bf64d: Link UP Dec 13 01:33:49.965252 systemd-networkd[1390]: cali53e602bf64d: Gained carrier Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.865 [INFO][4105] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.876 [INFO][4105] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0 calico-kube-controllers-5b567484f5- calico-system 04d42515-4a5b-418f-b47e-c07fd5f34d8b 844 0 2024-12-13 01:33:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b567484f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b567484f5-slbhn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali53e602bf64d [] []}} ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.876 [INFO][4105] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.913 [INFO][4117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" HandleID="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.924 [INFO][4117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" HandleID="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011d680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b567484f5-slbhn", "timestamp":"2024-12-13 01:33:49.91331829 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.925 [INFO][4117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.925 [INFO][4117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.925 [INFO][4117] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.927 [INFO][4117] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.933 [INFO][4117] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.938 [INFO][4117] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.939 [INFO][4117] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.942 [INFO][4117] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.942 [INFO][4117] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.943 [INFO][4117] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189 Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.947 [INFO][4117] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.953 [INFO][4117] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.953 [INFO][4117] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" host="localhost" Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.953 [INFO][4117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:49.980481 containerd[1464]: 2024-12-13 01:33:49.953 [INFO][4117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" HandleID="k8s-pod-network.eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.981271 containerd[1464]: 2024-12-13 01:33:49.957 [INFO][4105] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0", GenerateName:"calico-kube-controllers-5b567484f5-", Namespace:"calico-system", SelfLink:"", UID:"04d42515-4a5b-418f-b47e-c07fd5f34d8b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b567484f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b567484f5-slbhn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53e602bf64d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:49.981271 containerd[1464]: 2024-12-13 01:33:49.957 [INFO][4105] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.981271 containerd[1464]: 2024-12-13 01:33:49.957 [INFO][4105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53e602bf64d ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.981271 containerd[1464]: 2024-12-13 01:33:49.965 [INFO][4105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:49.981271 containerd[1464]: 2024-12-13 01:33:49.965 [INFO][4105] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0", GenerateName:"calico-kube-controllers-5b567484f5-", Namespace:"calico-system", SelfLink:"", UID:"04d42515-4a5b-418f-b47e-c07fd5f34d8b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b567484f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189", Pod:"calico-kube-controllers-5b567484f5-slbhn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53e602bf64d", MAC:"8e:dc:0a:42:f1:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:49.981271 containerd[1464]: 2024-12-13 01:33:49.976 [INFO][4105] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189" Namespace="calico-system" Pod="calico-kube-controllers-5b567484f5-slbhn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0" Dec 13 01:33:50.028894 containerd[1464]: time="2024-12-13T01:33:50.026592639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:50.028894 containerd[1464]: time="2024-12-13T01:33:50.026711032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:50.028894 containerd[1464]: time="2024-12-13T01:33:50.026732843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.028894 containerd[1464]: time="2024-12-13T01:33:50.026896450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.056972 systemd[1]: Started cri-containerd-eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189.scope - libcontainer container eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189. 
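The IPAM transcript above is one complete allocation transaction: acquire the host-wide lock, look up this host's block affinity, load the block 192.168.88.128/26, claim one address (192.168.88.129), write the block back, release the lock. A /26 leaves the host 2^(32-26) = 64 addresses to hand out without touching the datastore again; the arithmetic, checked with net/netip (illustration only, not Calico code):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block this host claimed in the log above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.129")

	// A /26 spans 2^(32-26) = 64 addresses: .128 through .191.
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64
	fmt.Println("assigned IP inside block:", block.Contains(assigned)) // true
}
```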
Dec 13 01:33:50.071379 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:50.096809 containerd[1464]: time="2024-12-13T01:33:50.096746937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b567484f5-slbhn,Uid:04d42515-4a5b-418f-b47e-c07fd5f34d8b,Namespace:calico-system,Attempt:1,} returns sandbox id \"eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189\"" Dec 13 01:33:50.098430 containerd[1464]: time="2024-12-13T01:33:50.098401005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:33:50.389399 containerd[1464]: time="2024-12-13T01:33:50.389081883Z" level=info msg="StopPodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\"" Dec 13 01:33:50.389399 containerd[1464]: time="2024-12-13T01:33:50.389274956Z" level=info msg="StopPodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\"" Dec 13 01:33:50.389604 containerd[1464]: time="2024-12-13T01:33:50.389542017Z" level=info msg="StopPodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\"" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.494 [INFO][4237] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.494 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" iface="eth0" netns="/var/run/netns/cni-d134a343-a4be-7b5e-89ac-03adfe1fc287" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.495 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" iface="eth0" netns="/var/run/netns/cni-d134a343-a4be-7b5e-89ac-03adfe1fc287" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.495 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" iface="eth0" netns="/var/run/netns/cni-d134a343-a4be-7b5e-89ac-03adfe1fc287" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.495 [INFO][4237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.495 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.549 [INFO][4271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.550 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.550 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.559 [WARNING][4271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.560 [INFO][4271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.562 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:50.569732 containerd[1464]: 2024-12-13 01:33:50.566 [INFO][4237] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:33:50.571023 containerd[1464]: time="2024-12-13T01:33:50.570970856Z" level=info msg="TearDown network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" successfully" Dec 13 01:33:50.571023 containerd[1464]: time="2024-12-13T01:33:50.571016381Z" level=info msg="StopPodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" returns successfully" Dec 13 01:33:50.573632 kubelet[2575]: E1213 01:33:50.573110 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.573679 systemd[1]: run-netns-cni\x2dd134a343\x2da4be\x2d7b5e\x2d89ac\x2d03adfe1fc287.mount: Deactivated successfully. Dec 13 01:33:50.575015 containerd[1464]: time="2024-12-13T01:33:50.574963648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68jq8,Uid:ede8f753-82f0-4f13-acfc-752baf14716b,Namespace:kube-system,Attempt:1,}" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.512 [INFO][4236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.512 [INFO][4236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" iface="eth0" netns="/var/run/netns/cni-67acfbe3-da87-70d1-e80e-88a52eabf99e" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.512 [INFO][4236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" iface="eth0" netns="/var/run/netns/cni-67acfbe3-da87-70d1-e80e-88a52eabf99e" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" iface="eth0" netns="/var/run/netns/cni-67acfbe3-da87-70d1-e80e-88a52eabf99e" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.579 [INFO][4275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.579 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.579 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.587 [WARNING][4275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.587 [INFO][4275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.589 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:50.595938 containerd[1464]: 2024-12-13 01:33:50.593 [INFO][4236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:33:50.598186 containerd[1464]: time="2024-12-13T01:33:50.596387433Z" level=info msg="TearDown network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" successfully" Dec 13 01:33:50.598186 containerd[1464]: time="2024-12-13T01:33:50.596419152Z" level=info msg="StopPodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" returns successfully" Dec 13 01:33:50.601876 containerd[1464]: time="2024-12-13T01:33:50.600264919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mp76n,Uid:2aa326b4-c51a-4e10-93c9-213b40c6cdc7,Namespace:calico-system,Attempt:1,}" Dec 13 01:33:50.603889 systemd[1]: run-netns-cni\x2d67acfbe3\x2dda87\x2d70d1\x2de80e\x2d88a52eabf99e.mount: Deactivated successfully. Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.512 [INFO][4238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.512 [INFO][4238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" iface="eth0" netns="/var/run/netns/cni-5d109d86-b804-7ab4-288b-a5e3dcecba51" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" iface="eth0" netns="/var/run/netns/cni-5d109d86-b804-7ab4-288b-a5e3dcecba51" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" iface="eth0" netns="/var/run/netns/cni-5d109d86-b804-7ab4-288b-a5e3dcecba51" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.513 [INFO][4238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.582 [INFO][4276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.582 [INFO][4276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.589 [INFO][4276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.599 [WARNING][4276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.599 [INFO][4276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.602 [INFO][4276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:50.610918 containerd[1464]: 2024-12-13 01:33:50.607 [INFO][4238] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:33:50.611984 containerd[1464]: time="2024-12-13T01:33:50.611937431Z" level=info msg="TearDown network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" successfully" Dec 13 01:33:50.611984 containerd[1464]: time="2024-12-13T01:33:50.611977997Z" level=info msg="StopPodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" returns successfully" Dec 13 01:33:50.612877 containerd[1464]: time="2024-12-13T01:33:50.612807325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-fj2gm,Uid:3a651f28-b895-466b-a4fa-253090684670,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:33:50.614424 systemd[1]: run-netns-cni\x2d5d109d86\x2db804\x2d7ab4\x2d288b\x2da5e3dcecba51.mount: Deactivated successfully. Dec 13 01:33:50.725674 systemd-networkd[1390]: calif43aff931e6: Link UP Dec 13 01:33:50.727576 systemd-networkd[1390]: calif43aff931e6: Gained carrier Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.629 [INFO][4304] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.646 [INFO][4304] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0 coredns-7db6d8ff4d- kube-system ede8f753-82f0-4f13-acfc-752baf14716b 857 0 2024-12-13 01:33:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-68jq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif43aff931e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.646 [INFO][4304] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.681 [INFO][4325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" HandleID="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.692 [INFO][4325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" HandleID="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363bf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-68jq8", "timestamp":"2024-12-13 01:33:50.681214219 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.692 [INFO][4325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.692 [INFO][4325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.692 [INFO][4325] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.694 [INFO][4325] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.698 [INFO][4325] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.705 [INFO][4325] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.707 [INFO][4325] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.709 [INFO][4325] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.709 [INFO][4325] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.710 [INFO][4325] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.714 [INFO][4325] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.718 [INFO][4325] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.718 [INFO][4325] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" host="localhost" Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.718 [INFO][4325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
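This second transaction reuses the same affine block and yields the next free address, 192.168.88.130, for coredns-7db6d8ff4d-68jq8; only the handle and the write-back differ from the first claim. The visible effect is a first-free scan over the block, sketched below as a toy (Calico's actual allocator tracks per-block bitmaps and handles, so this models only the outcome seen in the log):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affine block and returns the first address
// not yet in use, matching the sequential claims in the log.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // block base, never handed out in this log
		netip.MustParseAddr("192.168.88.129"): true, // calico-kube-controllers, claimed earlier
	}
	a, _ := nextFree(block, used)
	fmt.Println(a) // 192.168.88.130, matching the coredns claim above
}
```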
Dec 13 01:33:50.741321 containerd[1464]: 2024-12-13 01:33:50.718 [INFO][4325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" HandleID="k8s-pod-network.369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.742267 containerd[1464]: 2024-12-13 01:33:50.722 [INFO][4304] cni-plugin/k8s.go 386: Populated endpoint ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ede8f753-82f0-4f13-acfc-752baf14716b", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-68jq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43aff931e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:50.742267 containerd[1464]: 2024-12-13 01:33:50.722 [INFO][4304] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.742267 containerd[1464]: 2024-12-13 01:33:50.723 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif43aff931e6 ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.742267 containerd[1464]: 2024-12-13 01:33:50.726 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.742267 containerd[1464]: 2024-12-13 01:33:50.726 
[INFO][4304] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ede8f753-82f0-4f13-acfc-752baf14716b", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd", Pod:"coredns-7db6d8ff4d-68jq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43aff931e6", MAC:"e2:0f:68:05:13:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:50.742267 containerd[1464]: 2024-12-13 01:33:50.737 [INFO][4304] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68jq8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:33:50.766971 systemd-networkd[1390]: cali74465e011c2: Link UP Dec 13 01:33:50.768338 systemd-networkd[1390]: cali74465e011c2: Gained carrier Dec 13 01:33:50.780615 containerd[1464]: time="2024-12-13T01:33:50.780441714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:50.780615 containerd[1464]: time="2024-12-13T01:33:50.780530160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:50.780615 containerd[1464]: time="2024-12-13T01:33:50.780551620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.781398 containerd[1464]: time="2024-12-13T01:33:50.781058803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.646 [INFO][4311] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.659 [INFO][4311] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mp76n-eth0 csi-node-driver- calico-system 2aa326b4-c51a-4e10-93c9-213b40c6cdc7 859 0 2024-12-13 01:33:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mp76n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali74465e011c2 [] []}} ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.660 [INFO][4311] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.695 [INFO][4330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" HandleID="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.708 [INFO][4330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" HandleID="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mp76n", "timestamp":"2024-12-13 01:33:50.695675001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.708 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.719 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.719 [INFO][4330] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.720 [INFO][4330] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.725 [INFO][4330] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.731 [INFO][4330] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.733 [INFO][4330] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.739 [INFO][4330] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.739 [INFO][4330] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.741 [INFO][4330] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739 Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.745 [INFO][4330] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.753 [INFO][4330] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.753 [INFO][4330] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" host="localhost" Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.753 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:50.787477 containerd[1464]: 2024-12-13 01:33:50.753 [INFO][4330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" HandleID="k8s-pod-network.180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.788299 containerd[1464]: 2024-12-13 01:33:50.759 [INFO][4311] cni-plugin/k8s.go 386: Populated endpoint ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mp76n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2aa326b4-c51a-4e10-93c9-213b40c6cdc7", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mp76n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali74465e011c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:50.788299 containerd[1464]: 2024-12-13 01:33:50.760 [INFO][4311] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.788299 containerd[1464]: 2024-12-13 01:33:50.760 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74465e011c2 ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.788299 containerd[1464]: 2024-12-13 01:33:50.770 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.788299 containerd[1464]: 2024-12-13 01:33:50.771 [INFO][4311] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mp76n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2aa326b4-c51a-4e10-93c9-213b40c6cdc7", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739", Pod:"csi-node-driver-mp76n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali74465e011c2", MAC:"4e:4c:05:d1:1f:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:50.788299 containerd[1464]: 2024-12-13 01:33:50.784 [INFO][4311] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739" Namespace="calico-system" Pod="csi-node-driver-mp76n" WorkloadEndpoint="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:33:50.813913 systemd-networkd[1390]: cali6f7e593337b: Link UP Dec 13 01:33:50.814053 systemd[1]: Started cri-containerd-369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd.scope - libcontainer container 369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd. 
Dec 13 01:33:50.814985 systemd-networkd[1390]: cali6f7e593337b: Gained carrier Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.688 [INFO][4333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.703 [INFO][4333] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0 calico-apiserver-7759d578c8- calico-apiserver 3a651f28-b895-466b-a4fa-253090684670 858 0 2024-12-13 01:33:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7759d578c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7759d578c8-fj2gm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f7e593337b [] []}} ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.703 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.753 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" HandleID="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.765 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" HandleID="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7759d578c8-fj2gm", "timestamp":"2024-12-13 01:33:50.753397328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.765 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.765 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.765 [INFO][4355] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.767 [INFO][4355] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.773 [INFO][4355] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.779 [INFO][4355] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.784 [INFO][4355] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.787 [INFO][4355] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.787 [INFO][4355] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.789 [INFO][4355] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.796 [INFO][4355] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.804 [INFO][4355] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.804 [INFO][4355] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" host="localhost" Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.804 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:50.831058 containerd[1464]: 2024-12-13 01:33:50.804 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" HandleID="k8s-pod-network.bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.831607 containerd[1464]: 2024-12-13 01:33:50.811 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a651f28-b895-466b-a4fa-253090684670", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7759d578c8-fj2gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e593337b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:50.831607 containerd[1464]: 2024-12-13 01:33:50.811 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.831607 containerd[1464]: 2024-12-13 01:33:50.811 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f7e593337b ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.831607 containerd[1464]: 2024-12-13 01:33:50.813 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.831607 containerd[1464]: 2024-12-13 01:33:50.813 [INFO][4333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a651f28-b895-466b-a4fa-253090684670", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b", Pod:"calico-apiserver-7759d578c8-fj2gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e593337b", MAC:"c6:a4:60:a9:cb:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:50.831607 containerd[1464]: 2024-12-13 01:33:50.826 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-fj2gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:33:50.833399 containerd[1464]: time="2024-12-13T01:33:50.833196348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:50.833399 containerd[1464]: time="2024-12-13T01:33:50.833252584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:50.833399 containerd[1464]: time="2024-12-13T01:33:50.833267372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.833399 containerd[1464]: time="2024-12-13T01:33:50.833354185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.835414 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:50.866161 systemd[1]: Started cri-containerd-180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739.scope - libcontainer container 180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739. Dec 13 01:33:50.869512 containerd[1464]: time="2024-12-13T01:33:50.869371620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:50.869512 containerd[1464]: time="2024-12-13T01:33:50.869460967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:50.869706 containerd[1464]: time="2024-12-13T01:33:50.869476187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.869706 containerd[1464]: time="2024-12-13T01:33:50.869607844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:50.874638 containerd[1464]: time="2024-12-13T01:33:50.874567814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68jq8,Uid:ede8f753-82f0-4f13-acfc-752baf14716b,Namespace:kube-system,Attempt:1,} returns sandbox id \"369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd\"" Dec 13 01:33:50.876170 kubelet[2575]: E1213 01:33:50.875528 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.879733 containerd[1464]: time="2024-12-13T01:33:50.879682854Z" level=info msg="CreateContainer within sandbox \"369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:33:50.888008 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:50.895236 systemd[1]: Started cri-containerd-bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b.scope - libcontainer container bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b. Dec 13 01:33:50.905636 containerd[1464]: time="2024-12-13T01:33:50.905571027Z" level=info msg="CreateContainer within sandbox \"369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4c2849d6376bdf182ef8ebf229a40998037aaeed35b4376e84c52aee23ce7bf\"" Dec 13 01:33:50.906362 containerd[1464]: time="2024-12-13T01:33:50.906041241Z" level=info msg="StartContainer for \"b4c2849d6376bdf182ef8ebf229a40998037aaeed35b4376e84c52aee23ce7bf\"" Dec 13 01:33:50.908851 containerd[1464]: time="2024-12-13T01:33:50.908801878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mp76n,Uid:2aa326b4-c51a-4e10-93c9-213b40c6cdc7,Namespace:calico-system,Attempt:1,} returns sandbox id \"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739\"" Dec 13 01:33:50.915253 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:50.947097 systemd[1]: Started cri-containerd-b4c2849d6376bdf182ef8ebf229a40998037aaeed35b4376e84c52aee23ce7bf.scope - libcontainer container b4c2849d6376bdf182ef8ebf229a40998037aaeed35b4376e84c52aee23ce7bf. 
Dec 13 01:33:50.950821 containerd[1464]: time="2024-12-13T01:33:50.950464811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-fj2gm,Uid:3a651f28-b895-466b-a4fa-253090684670,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b\"" Dec 13 01:33:50.984450 containerd[1464]: time="2024-12-13T01:33:50.984315326Z" level=info msg="StartContainer for \"b4c2849d6376bdf182ef8ebf229a40998037aaeed35b4376e84c52aee23ce7bf\" returns successfully" Dec 13 01:33:51.121946 kubelet[2575]: I1213 01:33:51.121623 2575 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:51.122525 kubelet[2575]: E1213 01:33:51.122488 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:51.391532 containerd[1464]: time="2024-12-13T01:33:51.391000243Z" level=info msg="StopPodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\"" Dec 13 01:33:51.394103 systemd-networkd[1390]: cali53e602bf64d: Gained IPv6LL Dec 13 01:33:51.396955 kernel: bpftool[4594]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:33:51.522294 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:50678.service - OpenSSH per-connection server daemon (10.0.0.1:50678). Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.486 [INFO][4606] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.486 [INFO][4606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" iface="eth0" netns="/var/run/netns/cni-38e82295-9367-4416-57eb-68bff9ca79e0" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.487 [INFO][4606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" iface="eth0" netns="/var/run/netns/cni-38e82295-9367-4416-57eb-68bff9ca79e0" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.488 [INFO][4606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" iface="eth0" netns="/var/run/netns/cni-38e82295-9367-4416-57eb-68bff9ca79e0" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.488 [INFO][4606] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.488 [INFO][4606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.517 [INFO][4613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.518 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.518 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.527 [WARNING][4613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.527 [INFO][4613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.529 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:51.537912 containerd[1464]: 2024-12-13 01:33:51.533 [INFO][4606] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Dec 13 01:33:51.539885 containerd[1464]: time="2024-12-13T01:33:51.538671869Z" level=info msg="TearDown network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" successfully" Dec 13 01:33:51.539885 containerd[1464]: time="2024-12-13T01:33:51.538701816Z" level=info msg="StopPodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" returns successfully" Dec 13 01:33:51.539885 containerd[1464]: time="2024-12-13T01:33:51.539741099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-pbvdn,Uid:6f95fdeb-1056-4a6f-ba9e-df8029b239a1,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:33:51.553275 kubelet[2575]: E1213 01:33:51.553230 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:51.555747 kubelet[2575]: E1213 01:33:51.555616 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:51.578220 systemd[1]: run-netns-cni\x2d38e82295\x2d9367\x2d4416\x2d57eb\x2d68bff9ca79e0.mount: Deactivated successfully. Dec 13 01:33:51.648979 kubelet[2575]: I1213 01:33:51.648234 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-68jq8" podStartSLOduration=33.648217236 podStartE2EDuration="33.648217236s" podCreationTimestamp="2024-12-13 01:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:51.647710866 +0000 UTC m=+48.341373072" watchObservedRunningTime="2024-12-13 01:33:51.648217236 +0000 UTC m=+48.341879432" Dec 13 01:33:51.686790 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 50678 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:33:51.689157 sshd[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:51.700743 systemd-logind[1445]: New session 11 of user core. Dec 13 01:33:51.705649 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 01:33:51.816749 systemd-networkd[1390]: vxlan.calico: Link UP Dec 13 01:33:51.816763 systemd-networkd[1390]: vxlan.calico: Gained carrier Dec 13 01:33:51.905661 systemd-networkd[1390]: cali74465e011c2: Gained IPv6LL Dec 13 01:33:51.935135 sshd[4621]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:51.946362 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:50678.service: Deactivated successfully. Dec 13 01:33:51.949799 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:33:51.950978 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:33:51.966300 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688). Dec 13 01:33:51.968447 systemd-logind[1445]: Removed session 11. Dec 13 01:33:51.999684 sshd[4736]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:33:52.001717 sshd[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:52.003303 systemd-networkd[1390]: calif8e5ef7d9a6: Link UP Dec 13 01:33:52.005095 systemd-networkd[1390]: calif8e5ef7d9a6: Gained carrier Dec 13 01:33:52.012596 systemd-logind[1445]: New session 12 of user core. Dec 13 01:33:52.017161 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:33:52.033032 systemd-networkd[1390]: cali6f7e593337b: Gained IPv6LL Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.785 [INFO][4662] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0 calico-apiserver-7759d578c8- calico-apiserver 6f95fdeb-1056-4a6f-ba9e-df8029b239a1 888 0 2024-12-13 01:33:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7759d578c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7759d578c8-pbvdn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8e5ef7d9a6 [] []}} ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.785 [INFO][4662] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.862 [INFO][4685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" HandleID="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.880 [INFO][4685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" HandleID="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" 
Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365b30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7759d578c8-pbvdn", "timestamp":"2024-12-13 01:33:51.861551987 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.880 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.880 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.880 [INFO][4685] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.883 [INFO][4685] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.890 [INFO][4685] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.902 [INFO][4685] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.906 [INFO][4685] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.909 [INFO][4685] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.909 [INFO][4685] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.911 [INFO][4685] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032 Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.929 [INFO][4685] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.990 [INFO][4685] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.990 [INFO][4685] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" host="localhost" Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.990 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:52.062579 containerd[1464]: 2024-12-13 01:33:51.990 [INFO][4685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" HandleID="k8s-pod-network.9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.064442 containerd[1464]: 2024-12-13 01:33:51.996 [INFO][4662] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f95fdeb-1056-4a6f-ba9e-df8029b239a1", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7759d578c8-pbvdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e5ef7d9a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:52.064442 containerd[1464]: 2024-12-13 01:33:51.997 [INFO][4662] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.064442 containerd[1464]: 2024-12-13 01:33:51.997 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8e5ef7d9a6 ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.064442 containerd[1464]: 2024-12-13 01:33:52.005 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.064442 containerd[1464]: 2024-12-13 01:33:52.006 [INFO][4662] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f95fdeb-1056-4a6f-ba9e-df8029b239a1", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032", Pod:"calico-apiserver-7759d578c8-pbvdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e5ef7d9a6", MAC:"22:6b:41:44:59:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:52.064442 containerd[1464]: 2024-12-13 01:33:52.056 [INFO][4662] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032" Namespace="calico-apiserver" Pod="calico-apiserver-7759d578c8-pbvdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0" Dec 13 01:33:52.389669 containerd[1464]: time="2024-12-13T01:33:52.389488970Z" level=info msg="StopPodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\"" Dec 13 01:33:52.412733 sshd[4736]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:52.425486 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:50688.service: Deactivated successfully. Dec 13 01:33:52.430427 containerd[1464]: time="2024-12-13T01:33:52.430370828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:52.431975 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:33:52.435234 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. 
Dec 13 01:33:52.437087 containerd[1464]: time="2024-12-13T01:33:52.435989182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:33:52.438079 containerd[1464]: time="2024-12-13T01:33:52.437633471Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:52.447077 containerd[1464]: time="2024-12-13T01:33:52.447006999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:52.447730 containerd[1464]: time="2024-12-13T01:33:52.447689722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.349255204s" Dec 13 01:33:52.447730 containerd[1464]: time="2024-12-13T01:33:52.447726801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:33:52.448375 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:50694.service - OpenSSH per-connection server daemon (10.0.0.1:50694). Dec 13 01:33:52.451858 systemd-logind[1445]: Removed session 12. Dec 13 01:33:52.468092 containerd[1464]: time="2024-12-13T01:33:52.460384790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:33:52.468868 containerd[1464]: time="2024-12-13T01:33:52.468493783Z" level=info msg="CreateContainer within sandbox \"eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:33:52.475245 containerd[1464]: time="2024-12-13T01:33:52.475086207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:52.475245 containerd[1464]: time="2024-12-13T01:33:52.475187777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:52.475245 containerd[1464]: time="2024-12-13T01:33:52.475208547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:52.476315 containerd[1464]: time="2024-12-13T01:33:52.475340224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:52.492708 containerd[1464]: time="2024-12-13T01:33:52.492585681Z" level=info msg="CreateContainer within sandbox \"eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"76306fc96b6b39c9060546ab16b2cf78189431fdc595f5b5151b19e5ba146319\"" Dec 13 01:33:52.494196 containerd[1464]: time="2024-12-13T01:33:52.493632627Z" level=info msg="StartContainer for \"76306fc96b6b39c9060546ab16b2cf78189431fdc595f5b5151b19e5ba146319\"" Dec 13 01:33:52.497099 sshd[4843]: Accepted publickey for core from 10.0.0.1 port 50694 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:33:52.498652 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:52.522731 systemd[1]: Started cri-containerd-9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032.scope - libcontainer container 9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032. Dec 13 01:33:52.526571 systemd-logind[1445]: New session 13 of user core. Dec 13 01:33:52.529528 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.483 [INFO][4832] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.484 [INFO][4832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" iface="eth0" netns="/var/run/netns/cni-7d050478-aff5-b12f-53cd-686d9394b558" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.484 [INFO][4832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" iface="eth0" netns="/var/run/netns/cni-7d050478-aff5-b12f-53cd-686d9394b558" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.484 [INFO][4832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" iface="eth0" netns="/var/run/netns/cni-7d050478-aff5-b12f-53cd-686d9394b558" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.484 [INFO][4832] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.484 [INFO][4832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.517 [INFO][4873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.518 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.518 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.530 [WARNING][4873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.530 [INFO][4873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.532 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:52.540718 containerd[1464]: 2024-12-13 01:33:52.536 [INFO][4832] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Dec 13 01:33:52.541755 containerd[1464]: time="2024-12-13T01:33:52.541644739Z" level=info msg="TearDown network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" successfully" Dec 13 01:33:52.541755 containerd[1464]: time="2024-12-13T01:33:52.541700113Z" level=info msg="StopPodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" returns successfully" Dec 13 01:33:52.542199 kubelet[2575]: E1213 01:33:52.542175 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:52.543761 containerd[1464]: time="2024-12-13T01:33:52.543717553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmn8h,Uid:892b10bf-3a8d-4c3d-8649-291377d9695e,Namespace:kube-system,Attempt:1,}" Dec 13 01:33:52.546115 systemd[1]: Started cri-containerd-76306fc96b6b39c9060546ab16b2cf78189431fdc595f5b5151b19e5ba146319.scope - libcontainer container 76306fc96b6b39c9060546ab16b2cf78189431fdc595f5b5151b19e5ba146319. Dec 13 01:33:52.550683 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:33:52.562229 kubelet[2575]: E1213 01:33:52.562187 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:52.576048 systemd[1]: run-netns-cni\x2d7d050478\x2daff5\x2db12f\x2d53cd\x2d686d9394b558.mount: Deactivated successfully. Dec 13 01:33:52.599675 containerd[1464]: time="2024-12-13T01:33:52.599475115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7759d578c8-pbvdn,Uid:6f95fdeb-1056-4a6f-ba9e-df8029b239a1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032\"" Dec 13 01:33:52.619851 containerd[1464]: time="2024-12-13T01:33:52.619770340Z" level=info msg="StartContainer for \"76306fc96b6b39c9060546ab16b2cf78189431fdc595f5b5151b19e5ba146319\" returns successfully" Dec 13 01:33:52.709261 sshd[4843]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:52.720918 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 01:33:52.723243 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:50694.service: Deactivated successfully. Dec 13 01:33:52.729084 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:33:52.738197 systemd-networkd[1390]: calif43aff931e6: Gained IPv6LL Dec 13 01:33:52.739447 systemd-logind[1445]: Removed session 13. Dec 13 01:33:52.768518 systemd-networkd[1390]: caliab096e9d26b: Link UP Dec 13 01:33:52.769091 systemd-networkd[1390]: caliab096e9d26b: Gained carrier Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.640 [INFO][4918] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0 coredns-7db6d8ff4d- kube-system 892b10bf-3a8d-4c3d-8649-291377d9695e 922 0 2024-12-13 01:33:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-fmn8h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab096e9d26b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-" Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.640 [INFO][4918] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.679 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" HandleID="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.698 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" HandleID="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001327d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-fmn8h", "timestamp":"2024-12-13 01:33:52.678984656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.698 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.698 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.698 [INFO][4958] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.700 [INFO][4958] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.707 [INFO][4958] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.714 [INFO][4958] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.717 [INFO][4958] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.720 [INFO][4958] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.720 [INFO][4958] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.723 [INFO][4958] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.732 [INFO][4958] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.753 [INFO][4958] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.753 [INFO][4958] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" host="localhost"
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.753 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:33:52.789421 containerd[1464]: 2024-12-13 01:33:52.753 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" HandleID="k8s-pod-network.dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:33:52.791410 containerd[1464]: 2024-12-13 01:33:52.762 [INFO][4918] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"892b10bf-3a8d-4c3d-8649-291377d9695e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-fmn8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab096e9d26b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:33:52.791410 containerd[1464]: 2024-12-13 01:33:52.763 [INFO][4918] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:33:52.791410 containerd[1464]: 2024-12-13 01:33:52.763 [INFO][4918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab096e9d26b ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:33:52.791410 containerd[1464]: 2024-12-13 01:33:52.769 [INFO][4918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:33:52.791410 containerd[1464]: 2024-12-13 01:33:52.769 [INFO][4918] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"892b10bf-3a8d-4c3d-8649-291377d9695e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41", Pod:"coredns-7db6d8ff4d-fmn8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab096e9d26b", MAC:"0e:d5:76:04:5d:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:33:52.791410 containerd[1464]: 2024-12-13 01:33:52.782 [INFO][4918] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmn8h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:33:52.866443 containerd[1464]: time="2024-12-13T01:33:52.866264927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:52.866443 containerd[1464]: time="2024-12-13T01:33:52.866360416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:52.866443 containerd[1464]: time="2024-12-13T01:33:52.866375915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:52.867017 containerd[1464]: time="2024-12-13T01:33:52.866903347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:52.898113 systemd[1]: Started cri-containerd-dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41.scope - libcontainer container dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41.
Dec 13 01:33:52.914670 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:33:52.930128 systemd-networkd[1390]: vxlan.calico: Gained IPv6LL
Dec 13 01:33:52.943573 containerd[1464]: time="2024-12-13T01:33:52.943499985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmn8h,Uid:892b10bf-3a8d-4c3d-8649-291377d9695e,Namespace:kube-system,Attempt:1,} returns sandbox id \"dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41\""
Dec 13 01:33:52.944446 kubelet[2575]: E1213 01:33:52.944422 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:52.947729 containerd[1464]: time="2024-12-13T01:33:52.947694945Z" level=info msg="CreateContainer within sandbox \"dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:33:52.966128 containerd[1464]: time="2024-12-13T01:33:52.965981958Z" level=info msg="CreateContainer within sandbox \"dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c585e9000840f5eac30ede8b06a5d3f1d6a47a0f598b485bd7ea2161fa091e99\""
Dec 13 01:33:52.966951 containerd[1464]: time="2024-12-13T01:33:52.966810925Z" level=info msg="StartContainer for \"c585e9000840f5eac30ede8b06a5d3f1d6a47a0f598b485bd7ea2161fa091e99\""
Dec 13 01:33:53.006218 systemd[1]: Started cri-containerd-c585e9000840f5eac30ede8b06a5d3f1d6a47a0f598b485bd7ea2161fa091e99.scope - libcontainer container c585e9000840f5eac30ede8b06a5d3f1d6a47a0f598b485bd7ea2161fa091e99.
Dec 13 01:33:53.042403 containerd[1464]: time="2024-12-13T01:33:53.042328448Z" level=info msg="StartContainer for \"c585e9000840f5eac30ede8b06a5d3f1d6a47a0f598b485bd7ea2161fa091e99\" returns successfully"
Dec 13 01:33:53.566872 kubelet[2575]: E1213 01:33:53.566784 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:53.570310 kubelet[2575]: E1213 01:33:53.570144 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:53.602443 kubelet[2575]: I1213 01:33:53.602369 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fmn8h" podStartSLOduration=35.602230279 podStartE2EDuration="35.602230279s" podCreationTimestamp="2024-12-13 01:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:53.585686824 +0000 UTC m=+50.279349030" watchObservedRunningTime="2024-12-13 01:33:53.602230279 +0000 UTC m=+50.295892485"
Dec 13 01:33:53.740426 kubelet[2575]: I1213 01:33:53.740337 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b567484f5-slbhn" podStartSLOduration=27.385790393 podStartE2EDuration="29.740314366s" podCreationTimestamp="2024-12-13 01:33:24 +0000 UTC" firstStartedPulling="2024-12-13 01:33:50.098029206 +0000 UTC m=+46.791691412" lastFinishedPulling="2024-12-13 01:33:52.452553179 +0000 UTC m=+49.146215385" observedRunningTime="2024-12-13 01:33:53.636379562 +0000 UTC m=+50.330041768" watchObservedRunningTime="2024-12-13 01:33:53.740314366 +0000 UTC m=+50.433976572"
Dec 13 01:33:53.825184 systemd-networkd[1390]: calif8e5ef7d9a6: Gained IPv6LL
Dec 13 01:33:54.017077 systemd-networkd[1390]: caliab096e9d26b: Gained IPv6LL
Dec 13 01:33:54.426759 containerd[1464]: time="2024-12-13T01:33:54.426682021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:54.427716 containerd[1464]: time="2024-12-13T01:33:54.427653255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:33:54.428822 containerd[1464]: time="2024-12-13T01:33:54.428763550Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:54.431048 containerd[1464]: time="2024-12-13T01:33:54.431002205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:54.431716 containerd[1464]: time="2024-12-13T01:33:54.431671802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.971228202s"
Dec 13 01:33:54.431716 containerd[1464]: time="2024-12-13T01:33:54.431714703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:33:54.432668 containerd[1464]: time="2024-12-13T01:33:54.432640502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:33:54.434331 containerd[1464]: time="2024-12-13T01:33:54.434280882Z" level=info msg="CreateContainer within sandbox \"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:33:54.453144 containerd[1464]: time="2024-12-13T01:33:54.453081763Z" level=info msg="CreateContainer within sandbox \"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ef5eed297575fc405cc93413b2fa159426a572bdac0046b7a2690948471bac8c\""
Dec 13 01:33:54.453735 containerd[1464]: time="2024-12-13T01:33:54.453702780Z" level=info msg="StartContainer for \"ef5eed297575fc405cc93413b2fa159426a572bdac0046b7a2690948471bac8c\""
Dec 13 01:33:54.487075 systemd[1]: Started cri-containerd-ef5eed297575fc405cc93413b2fa159426a572bdac0046b7a2690948471bac8c.scope - libcontainer container ef5eed297575fc405cc93413b2fa159426a572bdac0046b7a2690948471bac8c.
Dec 13 01:33:54.526355 containerd[1464]: time="2024-12-13T01:33:54.526304115Z" level=info msg="StartContainer for \"ef5eed297575fc405cc93413b2fa159426a572bdac0046b7a2690948471bac8c\" returns successfully"
Dec 13 01:33:54.575504 kubelet[2575]: E1213 01:33:54.575457 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:55.577968 kubelet[2575]: E1213 01:33:55.577927 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:56.967804 containerd[1464]: time="2024-12-13T01:33:56.967742656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:56.968763 containerd[1464]: time="2024-12-13T01:33:56.968714941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 01:33:56.969911 containerd[1464]: time="2024-12-13T01:33:56.969880069Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:56.972506 containerd[1464]: time="2024-12-13T01:33:56.972473099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:56.973273 containerd[1464]: time="2024-12-13T01:33:56.973238777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.540564601s"
Dec 13 01:33:56.973337 containerd[1464]: time="2024-12-13T01:33:56.973273011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:33:56.974217 containerd[1464]: time="2024-12-13T01:33:56.974190885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:33:56.975540 containerd[1464]: time="2024-12-13T01:33:56.975436694Z" level=info msg="CreateContainer within sandbox \"bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:33:56.992341 containerd[1464]: time="2024-12-13T01:33:56.992260195Z" level=info msg="CreateContainer within sandbox \"bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cddae596787298355ec07d5846a031b7f7c859ab389a996e4f5f05dc54c55988\""
Dec 13 01:33:56.993767 containerd[1464]: time="2024-12-13T01:33:56.993718223Z" level=info msg="StartContainer for \"cddae596787298355ec07d5846a031b7f7c859ab389a996e4f5f05dc54c55988\""
Dec 13 01:33:57.025987 systemd[1]: Started cri-containerd-cddae596787298355ec07d5846a031b7f7c859ab389a996e4f5f05dc54c55988.scope - libcontainer container cddae596787298355ec07d5846a031b7f7c859ab389a996e4f5f05dc54c55988.
Dec 13 01:33:57.077763 containerd[1464]: time="2024-12-13T01:33:57.077706735Z" level=info msg="StartContainer for \"cddae596787298355ec07d5846a031b7f7c859ab389a996e4f5f05dc54c55988\" returns successfully"
Dec 13 01:33:57.340655 containerd[1464]: time="2024-12-13T01:33:57.340512905Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:57.342526 kubelet[2575]: E1213 01:33:57.342122 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:57.342990 containerd[1464]: time="2024-12-13T01:33:57.342207886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Dec 13 01:33:57.345940 containerd[1464]: time="2024-12-13T01:33:57.345823586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 371.601703ms"
Dec 13 01:33:57.345940 containerd[1464]: time="2024-12-13T01:33:57.345904889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:33:57.347596 containerd[1464]: time="2024-12-13T01:33:57.347138735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:33:57.348373 containerd[1464]: time="2024-12-13T01:33:57.348331886Z" level=info msg="CreateContainer within sandbox \"9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:33:57.372539 containerd[1464]: time="2024-12-13T01:33:57.372396791Z" level=info msg="CreateContainer within sandbox \"9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fdc4b35f54b2da8c1b98342df822fca32bea5bc0cbc924498f0b7b02f068bbdc\""
Dec 13 01:33:57.373626 containerd[1464]: time="2024-12-13T01:33:57.373561759Z" level=info msg="StartContainer for \"fdc4b35f54b2da8c1b98342df822fca32bea5bc0cbc924498f0b7b02f068bbdc\""
Dec 13 01:33:57.420918 systemd[1]: Started cri-containerd-fdc4b35f54b2da8c1b98342df822fca32bea5bc0cbc924498f0b7b02f068bbdc.scope - libcontainer container fdc4b35f54b2da8c1b98342df822fca32bea5bc0cbc924498f0b7b02f068bbdc.
Dec 13 01:33:57.732310 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:38924.service - OpenSSH per-connection server daemon (10.0.0.1:38924).
Dec 13 01:33:58.021653 containerd[1464]: time="2024-12-13T01:33:58.021486573Z" level=info msg="StartContainer for \"fdc4b35f54b2da8c1b98342df822fca32bea5bc0cbc924498f0b7b02f068bbdc\" returns successfully"
Dec 13 01:33:58.044024 kubelet[2575]: E1213 01:33:58.043987 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:58.059146 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 38924 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:33:58.061012 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:58.071256 kubelet[2575]: I1213 01:33:58.068304 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7759d578c8-fj2gm" podStartSLOduration=28.046264058 podStartE2EDuration="34.068280319s" podCreationTimestamp="2024-12-13 01:33:24 +0000 UTC" firstStartedPulling="2024-12-13 01:33:50.952055159 +0000 UTC m=+47.645717366" lastFinishedPulling="2024-12-13 01:33:56.974071421 +0000 UTC m=+53.667733627" observedRunningTime="2024-12-13 01:33:58.052000364 +0000 UTC m=+54.745662570" watchObservedRunningTime="2024-12-13 01:33:58.068280319 +0000 UTC m=+54.761942525"
Dec 13 01:33:58.071733 systemd-logind[1445]: New session 14 of user core.
Dec 13 01:33:58.074182 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:33:58.220922 sshd[5275]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:58.227258 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:33:58.227524 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:38924.service: Deactivated successfully.
Dec 13 01:33:58.229630 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:33:58.230724 systemd-logind[1445]: Removed session 14.
Dec 13 01:33:58.773265 kubelet[2575]: I1213 01:33:58.772810 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7759d578c8-pbvdn" podStartSLOduration=30.031337159 podStartE2EDuration="34.772787618s" podCreationTimestamp="2024-12-13 01:33:24 +0000 UTC" firstStartedPulling="2024-12-13 01:33:52.605282915 +0000 UTC m=+49.298945121" lastFinishedPulling="2024-12-13 01:33:57.346733374 +0000 UTC m=+54.040395580" observedRunningTime="2024-12-13 01:33:58.069018445 +0000 UTC m=+54.762680651" watchObservedRunningTime="2024-12-13 01:33:58.772787618 +0000 UTC m=+55.466449824"
Dec 13 01:33:59.045471 kubelet[2575]: I1213 01:33:59.045206 2575 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:33:59.769227 containerd[1464]: time="2024-12-13T01:33:59.769156577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:59.770311 containerd[1464]: time="2024-12-13T01:33:59.770267412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:33:59.771775 containerd[1464]: time="2024-12-13T01:33:59.771730439Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:59.774227 containerd[1464]: time="2024-12-13T01:33:59.774157205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:33:59.774970 containerd[1464]: time="2024-12-13T01:33:59.774927641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.427754722s"
Dec 13 01:33:59.774970 containerd[1464]: time="2024-12-13T01:33:59.774962868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:33:59.777214 containerd[1464]: time="2024-12-13T01:33:59.777176252Z" level=info msg="CreateContainer within sandbox \"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:33:59.795244 containerd[1464]: time="2024-12-13T01:33:59.795186003Z" level=info msg="CreateContainer within sandbox \"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"adb3899abd5224f1ea4034406f971271f9b11863080c35a8d85dc54996f875cb\""
Dec 13 01:33:59.795725 containerd[1464]: time="2024-12-13T01:33:59.795703375Z" level=info msg="StartContainer for \"adb3899abd5224f1ea4034406f971271f9b11863080c35a8d85dc54996f875cb\""
Dec 13 01:33:59.829102 systemd[1]: Started cri-containerd-adb3899abd5224f1ea4034406f971271f9b11863080c35a8d85dc54996f875cb.scope - libcontainer container adb3899abd5224f1ea4034406f971271f9b11863080c35a8d85dc54996f875cb.
Dec 13 01:33:59.870124 containerd[1464]: time="2024-12-13T01:33:59.870006710Z" level=info msg="StartContainer for \"adb3899abd5224f1ea4034406f971271f9b11863080c35a8d85dc54996f875cb\" returns successfully"
Dec 13 01:34:00.058849 kubelet[2575]: I1213 01:34:00.058566 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mp76n" podStartSLOduration=27.194221536 podStartE2EDuration="36.058546363s" podCreationTimestamp="2024-12-13 01:33:24 +0000 UTC" firstStartedPulling="2024-12-13 01:33:50.911555973 +0000 UTC m=+47.605218179" lastFinishedPulling="2024-12-13 01:33:59.77588079 +0000 UTC m=+56.469543006" observedRunningTime="2024-12-13 01:34:00.057637136 +0000 UTC m=+56.751299342" watchObservedRunningTime="2024-12-13 01:34:00.058546363 +0000 UTC m=+56.752208569"
Dec 13 01:34:00.465433 kubelet[2575]: I1213 01:34:00.465387 2575 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:34:00.465433 kubelet[2575]: I1213 01:34:00.465419 2575 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:34:01.297170 kubelet[2575]: I1213 01:34:01.297118 2575 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:34:03.233927 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:38934.service - OpenSSH per-connection server daemon (10.0.0.1:38934).
Dec 13 01:34:03.277019 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 38934 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:34:03.279029 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:03.283394 systemd-logind[1445]: New session 15 of user core.
Dec 13 01:34:03.297035 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:34:03.384115 containerd[1464]: time="2024-12-13T01:34:03.384061325Z" level=info msg="StopPodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\""
Dec 13 01:34:03.466527 sshd[5353]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.436 [WARNING][5379] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f95fdeb-1056-4a6f-ba9e-df8029b239a1", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032", Pod:"calico-apiserver-7759d578c8-pbvdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e5ef7d9a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.436 [INFO][5379] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.436 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" iface="eth0" netns=""
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.436 [INFO][5379] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.436 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.456 [INFO][5391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0"
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.456 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.456 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.462 [WARNING][5391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0"
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.462 [INFO][5391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0"
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.464 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:34:03.469979 containerd[1464]: 2024-12-13 01:34:03.466 [INFO][5379] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.470948 containerd[1464]: time="2024-12-13T01:34:03.470006267Z" level=info msg="TearDown network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" successfully"
Dec 13 01:34:03.470948 containerd[1464]: time="2024-12-13T01:34:03.470030542Z" level=info msg="StopPodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" returns successfully"
Dec 13 01:34:03.470948 containerd[1464]: time="2024-12-13T01:34:03.470586566Z" level=info msg="RemovePodSandbox for \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\""
Dec 13 01:34:03.470770 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:38934.service: Deactivated successfully.
Dec 13 01:34:03.473453 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:34:03.473614 containerd[1464]: time="2024-12-13T01:34:03.473574624Z" level=info msg="Forcibly stopping sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\""
Dec 13 01:34:03.474355 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:34:03.475320 systemd-logind[1445]: Removed session 15.
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.509 [WARNING][5416] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f95fdeb-1056-4a6f-ba9e-df8029b239a1", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9444007f6abf2c9a5ea35945ff7a4bdf31d2cbf6b8384570f43e3bcc8e12a032", Pod:"calico-apiserver-7759d578c8-pbvdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8e5ef7d9a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.509 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.509 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" iface="eth0" netns=""
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.509 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.509 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.528 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0"
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.528 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.528 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.533 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0"
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.533 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" HandleID="k8s-pod-network.bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590" Workload="localhost-k8s-calico--apiserver--7759d578c8--pbvdn-eth0"
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.534 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:34:03.538623 containerd[1464]: 2024-12-13 01:34:03.536 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590"
Dec 13 01:34:03.538623 containerd[1464]: time="2024-12-13T01:34:03.538580685Z" level=info msg="TearDown network for sandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" successfully"
Dec 13 01:34:03.700232 containerd[1464]: time="2024-12-13T01:34:03.700160306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:34:03.700408 containerd[1464]: time="2024-12-13T01:34:03.700272436Z" level=info msg="RemovePodSandbox \"bd695135eb75067031f70380b5dc79fb10f995131cc3212fe75d50744bb6b590\" returns successfully"
Dec 13 01:34:03.700899 containerd[1464]: time="2024-12-13T01:34:03.700869838Z" level=info msg="StopPodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\""
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.736 [WARNING][5449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"892b10bf-3a8d-4c3d-8649-291377d9695e", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41", Pod:"coredns-7db6d8ff4d-fmn8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab096e9d26b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.737 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.737 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" iface="eth0" netns=""
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.737 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.737 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.762 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.763 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.763 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.770 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.770 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.772 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:34:03.777529 containerd[1464]: 2024-12-13 01:34:03.775 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.778282 containerd[1464]: time="2024-12-13T01:34:03.777583393Z" level=info msg="TearDown network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" successfully"
Dec 13 01:34:03.778282 containerd[1464]: time="2024-12-13T01:34:03.777615363Z" level=info msg="StopPodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" returns successfully"
Dec 13 01:34:03.778282 containerd[1464]: time="2024-12-13T01:34:03.778117375Z" level=info msg="RemovePodSandbox for \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\""
Dec 13 01:34:03.778282 containerd[1464]: time="2024-12-13T01:34:03.778149305Z" level=info msg="Forcibly stopping sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\""
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.815 [WARNING][5479] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"892b10bf-3a8d-4c3d-8649-291377d9695e", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbb54bcce38140ef59307e9c6ba367093f4c3b75463c93e49e0fcf9383bdcd41", Pod:"coredns-7db6d8ff4d-fmn8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab096e9d26b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.815 [INFO][5479] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.815 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" iface="eth0" netns=""
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.815 [INFO][5479] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.815 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.842 [INFO][5486] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.842 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.842 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.847 [WARNING][5486] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.847 [INFO][5486] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" HandleID="k8s-pod-network.1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde" Workload="localhost-k8s-coredns--7db6d8ff4d--fmn8h-eth0"
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.849 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:34:03.854525 containerd[1464]: 2024-12-13 01:34:03.851 [INFO][5479] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde"
Dec 13 01:34:03.854525 containerd[1464]: time="2024-12-13T01:34:03.854498255Z" level=info msg="TearDown network for sandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" successfully"
Dec 13 01:34:03.885347 containerd[1464]: time="2024-12-13T01:34:03.885269961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:34:03.885347 containerd[1464]: time="2024-12-13T01:34:03.885345813Z" level=info msg="RemovePodSandbox \"1fe4ab4453b92bbad077e0536bcdc66abda3e72fbd8d724bfc368b3a9b817dde\" returns successfully"
Dec 13 01:34:03.885800 containerd[1464]: time="2024-12-13T01:34:03.885778265Z" level=info msg="StopPodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\""
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.052 [WARNING][5508] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0", GenerateName:"calico-kube-controllers-5b567484f5-", Namespace:"calico-system", SelfLink:"", UID:"04d42515-4a5b-418f-b47e-c07fd5f34d8b", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b567484f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189", Pod:"calico-kube-controllers-5b567484f5-slbhn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53e602bf64d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.052 [INFO][5508] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.052 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" iface="eth0" netns=""
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.052 [INFO][5508] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.052 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.086 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0"
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.087 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.087 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.091 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0"
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.091 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0"
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.092 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:34:04.097457 containerd[1464]: 2024-12-13 01:34:04.094 [INFO][5508] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.098029 containerd[1464]: time="2024-12-13T01:34:04.097547368Z" level=info msg="TearDown network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" successfully"
Dec 13 01:34:04.098029 containerd[1464]: time="2024-12-13T01:34:04.097605027Z" level=info msg="StopPodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" returns successfully"
Dec 13 01:34:04.098091 containerd[1464]: time="2024-12-13T01:34:04.098062906Z" level=info msg="RemovePodSandbox for \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\""
Dec 13 01:34:04.098148 containerd[1464]: time="2024-12-13T01:34:04.098098232Z" level=info msg="Forcibly stopping sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\""
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.133 [WARNING][5539] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0", GenerateName:"calico-kube-controllers-5b567484f5-", Namespace:"calico-system", SelfLink:"", UID:"04d42515-4a5b-418f-b47e-c07fd5f34d8b", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b567484f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eeef3ed92ce61196258cfdf008559cad7875ae647b6a8c84e16f30f972564189", Pod:"calico-kube-controllers-5b567484f5-slbhn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53e602bf64d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.133 [INFO][5539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.133 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" iface="eth0" netns=""
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.133 [INFO][5539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.133 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.158 [INFO][5546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0"
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.158 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.158 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.163 [WARNING][5546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0"
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.163 [INFO][5546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" HandleID="k8s-pod-network.5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf" Workload="localhost-k8s-calico--kube--controllers--5b567484f5--slbhn-eth0"
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.164 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:34:04.169675 containerd[1464]: 2024-12-13 01:34:04.167 [INFO][5539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf"
Dec 13 01:34:04.170162 containerd[1464]: time="2024-12-13T01:34:04.169721667Z" level=info msg="TearDown network for sandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" successfully"
Dec 13 01:34:04.233262 containerd[1464]: time="2024-12-13T01:34:04.233188773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:34:04.233427 containerd[1464]: time="2024-12-13T01:34:04.233274283Z" level=info msg="RemovePodSandbox \"5e3cd295c71796e4ddee99d164d46053ee53f084ab54d035f120815f79ad81cf\" returns successfully"
Dec 13 01:34:04.233774 containerd[1464]: time="2024-12-13T01:34:04.233746951Z" level=info msg="StopPodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\""
Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.266 [WARNING][5569] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mp76n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2aa326b4-c51a-4e10-93c9-213b40c6cdc7", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739", Pod:"csi-node-driver-mp76n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali74465e011c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.267 [INFO][5569] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.267 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" iface="eth0" netns="" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.267 [INFO][5569] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.267 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.286 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.286 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.286 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.290 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.290 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.291 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:04.297352 containerd[1464]: 2024-12-13 01:34:04.294 [INFO][5569] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.297920 containerd[1464]: time="2024-12-13T01:34:04.297394736Z" level=info msg="TearDown network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" successfully" Dec 13 01:34:04.297920 containerd[1464]: time="2024-12-13T01:34:04.297422408Z" level=info msg="StopPodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" returns successfully" Dec 13 01:34:04.297920 containerd[1464]: time="2024-12-13T01:34:04.297898752Z" level=info msg="RemovePodSandbox for \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\"" Dec 13 01:34:04.298061 containerd[1464]: time="2024-12-13T01:34:04.297929370Z" level=info msg="Forcibly stopping sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\"" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.328 [WARNING][5599] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mp76n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2aa326b4-c51a-4e10-93c9-213b40c6cdc7", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"180e409dcfd31a1889afae5b8ea4408328be803bccd9d66dbc22bb8c888f1739", Pod:"csi-node-driver-mp76n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali74465e011c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.328 [INFO][5599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.328 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" iface="eth0" netns="" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.328 [INFO][5599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.328 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.347 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.347 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.347 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.351 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.352 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" HandleID="k8s-pod-network.2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Workload="localhost-k8s-csi--node--driver--mp76n-eth0" Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.353 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:04.357608 containerd[1464]: 2024-12-13 01:34:04.355 [INFO][5599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba" Dec 13 01:34:04.358710 containerd[1464]: time="2024-12-13T01:34:04.357649062Z" level=info msg="TearDown network for sandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" successfully" Dec 13 01:34:04.386747 containerd[1464]: time="2024-12-13T01:34:04.386694766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:04.387194 containerd[1464]: time="2024-12-13T01:34:04.386776540Z" level=info msg="RemovePodSandbox \"2f608bb8bcac0ab94d55f8133589a54dbe0fee832d39eb5aada3181e45e24dba\" returns successfully" Dec 13 01:34:04.387194 containerd[1464]: time="2024-12-13T01:34:04.387178093Z" level=info msg="StopPodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\"" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.422 [WARNING][5628] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ede8f753-82f0-4f13-acfc-752baf14716b", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd", Pod:"coredns-7db6d8ff4d-68jq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43aff931e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.422 [INFO][5628] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.422 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" iface="eth0" netns="" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.422 [INFO][5628] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.422 [INFO][5628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.445 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.445 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.445 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.469 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.469 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.470 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:04.474974 containerd[1464]: 2024-12-13 01:34:04.472 [INFO][5628] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.475564 containerd[1464]: time="2024-12-13T01:34:04.474963460Z" level=info msg="TearDown network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" successfully" Dec 13 01:34:04.475564 containerd[1464]: time="2024-12-13T01:34:04.474996001Z" level=info msg="StopPodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" returns successfully" Dec 13 01:34:04.475564 containerd[1464]: time="2024-12-13T01:34:04.475550141Z" level=info msg="RemovePodSandbox for \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\"" Dec 13 01:34:04.475658 containerd[1464]: time="2024-12-13T01:34:04.475575449Z" level=info msg="Forcibly stopping sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\"" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.576 [WARNING][5659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ede8f753-82f0-4f13-acfc-752baf14716b", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"369e7b20659f7d2a90b7033925edc276667644b5e75144ca8b81c3384c60a4cd", Pod:"coredns-7db6d8ff4d-68jq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif43aff931e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.576 [INFO][5659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.576 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" iface="eth0" netns="" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.576 [INFO][5659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.576 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.596 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.596 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.596 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.600 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.600 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" HandleID="k8s-pod-network.ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Workload="localhost-k8s-coredns--7db6d8ff4d--68jq8-eth0" Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.602 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:04.606362 containerd[1464]: 2024-12-13 01:34:04.604 [INFO][5659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c" Dec 13 01:34:04.606890 containerd[1464]: time="2024-12-13T01:34:04.606399202Z" level=info msg="TearDown network for sandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" successfully" Dec 13 01:34:04.708043 containerd[1464]: time="2024-12-13T01:34:04.707968869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:04.708043 containerd[1464]: time="2024-12-13T01:34:04.708046556Z" level=info msg="RemovePodSandbox \"ce2bf7e25db751b1b75375fedcbb45582c6d58bb653762f7d8b35e5691772b0c\" returns successfully" Dec 13 01:34:04.708504 containerd[1464]: time="2024-12-13T01:34:04.708469709Z" level=info msg="StopPodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\"" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.773 [WARNING][5688] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a651f28-b895-466b-a4fa-253090684670", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b", Pod:"calico-apiserver-7759d578c8-fj2gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e593337b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.773 [INFO][5688] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.773 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" iface="eth0" netns="" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.773 [INFO][5688] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.773 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.796 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.796 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.796 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.801 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.801 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.802 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:04.807699 containerd[1464]: 2024-12-13 01:34:04.805 [INFO][5688] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.807699 containerd[1464]: time="2024-12-13T01:34:04.807684868Z" level=info msg="TearDown network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" successfully" Dec 13 01:34:04.808311 containerd[1464]: time="2024-12-13T01:34:04.807715585Z" level=info msg="StopPodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" returns successfully" Dec 13 01:34:04.808357 containerd[1464]: time="2024-12-13T01:34:04.808320481Z" level=info msg="RemovePodSandbox for \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\"" Dec 13 01:34:04.808503 containerd[1464]: time="2024-12-13T01:34:04.808447799Z" level=info msg="Forcibly stopping sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\"" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.844 [WARNING][5718] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0", GenerateName:"calico-apiserver-7759d578c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a651f28-b895-466b-a4fa-253090684670", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7759d578c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf052946ab336220b6bb19a7b425148fa035a0f5b48939744fddc5a54647720b", Pod:"calico-apiserver-7759d578c8-fj2gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e593337b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.844 [INFO][5718] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.845 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" iface="eth0" netns="" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.845 [INFO][5718] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.845 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.868 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.868 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.868 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.884 [WARNING][5725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.884 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" HandleID="k8s-pod-network.95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Workload="localhost-k8s-calico--apiserver--7759d578c8--fj2gm-eth0" Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.885 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:04.890965 containerd[1464]: 2024-12-13 01:34:04.888 [INFO][5718] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15" Dec 13 01:34:04.891633 containerd[1464]: time="2024-12-13T01:34:04.891567009Z" level=info msg="TearDown network for sandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" successfully" Dec 13 01:34:04.921236 containerd[1464]: time="2024-12-13T01:34:04.921143950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:04.921383 containerd[1464]: time="2024-12-13T01:34:04.921253356Z" level=info msg="RemovePodSandbox \"95a73869a97f66729033f977ee6d714d6ba082b73765fbc0a7541d3fc3109d15\" returns successfully" Dec 13 01:34:08.480062 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:43030.service - OpenSSH per-connection server daemon (10.0.0.1:43030). Dec 13 01:34:08.519734 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 43030 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:08.521991 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:08.526616 systemd-logind[1445]: New session 16 of user core. Dec 13 01:34:08.538174 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:34:08.656261 sshd[5753]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:08.661112 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:43030.service: Deactivated successfully. Dec 13 01:34:08.663796 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:34:08.664458 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:34:08.665466 systemd-logind[1445]: Removed session 16. Dec 13 01:34:13.675182 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:43044.service - OpenSSH per-connection server daemon (10.0.0.1:43044). Dec 13 01:34:13.718458 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 43044 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:13.720253 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:13.724429 systemd-logind[1445]: New session 17 of user core. Dec 13 01:34:13.730971 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:34:13.869749 sshd[5775]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:13.875548 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:43044.service: Deactivated successfully. Dec 13 01:34:13.878415 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 13 01:34:13.879202 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:34:13.880664 systemd-logind[1445]: Removed session 17. Dec 13 01:34:18.879340 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:45138.service - OpenSSH per-connection server daemon (10.0.0.1:45138). Dec 13 01:34:18.916555 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 45138 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:18.918396 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:18.922483 systemd-logind[1445]: New session 18 of user core. Dec 13 01:34:18.931008 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:34:19.049006 sshd[5791]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:19.060658 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:45138.service: Deactivated successfully. Dec 13 01:34:19.062917 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:34:19.064743 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:34:19.074387 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:45140.service - OpenSSH per-connection server daemon (10.0.0.1:45140). Dec 13 01:34:19.075374 systemd-logind[1445]: Removed session 18. Dec 13 01:34:19.102457 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 45140 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:19.104098 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:19.108152 systemd-logind[1445]: New session 19 of user core. Dec 13 01:34:19.118967 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:34:19.661945 sshd[5805]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:19.677053 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:45140.service: Deactivated successfully. Dec 13 01:34:19.680630 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:34:19.685371 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:34:19.694769 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:45156.service - OpenSSH per-connection server daemon (10.0.0.1:45156). Dec 13 01:34:19.695987 systemd-logind[1445]: Removed session 19. Dec 13 01:34:19.729868 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 45156 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:19.732325 sshd[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:19.738606 systemd-logind[1445]: New session 20 of user core. Dec 13 01:34:19.752134 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:34:21.390207 kubelet[2575]: E1213 01:34:21.390157 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:22.103769 sshd[5817]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:22.113181 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:45156.service: Deactivated successfully. Dec 13 01:34:22.115350 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:34:22.116443 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:34:22.124130 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). Dec 13 01:34:22.125000 systemd-logind[1445]: Removed session 20. 
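The kubelet "Nameserver limits exceeded" events interleaved with the SSH traffic come from the resolver's three-nameserver cap: when the node's resolv.conf lists more servers than the limit, kubelet keeps the leading entries and logs the applied line (here "1.1.1.1 1.0.0.1 8.8.8.8"). A hedged sketch of that truncation follows; maxNameservers and applyLimit are illustrative names, not kubelet's real API.

    // Sketch of the behavior behind the dns.go:153 events above, assuming
    // the standard three-entry resolver limit.
    package main

    import (
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // classic glibc resolver limit

    // applyLimit drops any nameservers past the cap and reports whether
    // truncation happened.
    func applyLimit(nameservers []string) ([]string, bool) {
    	if len(nameservers) <= maxNameservers {
    		return nameservers, false
    	}
    	return nameservers[:maxNameservers], true
    }

    func main() {
    	applied, truncated := applyLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
    	if truncated {
    		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
    			strings.Join(applied, " "))
    	}
    }

Because the node keeps the same resolv.conf, the same warning recurs every time kubelet re-reads it, which is why the event repeats at 01:34:21, 01:34:24, 01:34:37, and 01:34:44 below.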
Dec 13 01:34:22.156827 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:22.158421 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:22.162877 systemd-logind[1445]: New session 21 of user core. Dec 13 01:34:22.169976 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:34:22.487664 sshd[5841]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:22.505010 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:45170.service: Deactivated successfully. Dec 13 01:34:22.506936 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:34:22.508420 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:34:22.514120 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:45186.service - OpenSSH per-connection server daemon (10.0.0.1:45186). Dec 13 01:34:22.515960 systemd-logind[1445]: Removed session 21. Dec 13 01:34:22.546086 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 45186 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:22.547826 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:22.551947 systemd-logind[1445]: New session 22 of user core. Dec 13 01:34:22.570017 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:34:22.673185 sshd[5853]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:22.677356 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:45186.service: Deactivated successfully. Dec 13 01:34:22.679501 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:34:22.680322 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:34:22.681226 systemd-logind[1445]: Removed session 22. Dec 13 01:34:24.388919 kubelet[2575]: E1213 01:34:24.388860 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:27.686755 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:56550.service - OpenSSH per-connection server daemon (10.0.0.1:56550). Dec 13 01:34:27.719437 sshd[5889]: Accepted publickey for core from 10.0.0.1 port 56550 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:27.720962 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:27.724958 systemd-logind[1445]: New session 23 of user core. Dec 13 01:34:27.732966 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:34:27.839996 sshd[5889]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:27.844579 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:56550.service: Deactivated successfully. Dec 13 01:34:27.847527 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:34:27.848276 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:34:27.849379 systemd-logind[1445]: Removed session 23. Dec 13 01:34:32.858082 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:56560.service - OpenSSH per-connection server daemon (10.0.0.1:56560). 
Dec 13 01:34:32.889371 sshd[5913]: Accepted publickey for core from 10.0.0.1 port 56560 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:32.891108 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:32.895657 systemd-logind[1445]: New session 24 of user core. Dec 13 01:34:32.904987 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:34:33.011048 sshd[5913]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:33.015278 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:56560.service: Deactivated successfully. Dec 13 01:34:33.017613 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:34:33.018316 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:34:33.019502 systemd-logind[1445]: Removed session 24. Dec 13 01:34:37.389149 kubelet[2575]: E1213 01:34:37.389088 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:38.025326 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:42078.service - OpenSSH per-connection server daemon (10.0.0.1:42078). Dec 13 01:34:38.059872 sshd[5951]: Accepted publickey for core from 10.0.0.1 port 42078 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:38.061603 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:38.066194 systemd-logind[1445]: New session 25 of user core. Dec 13 01:34:38.075023 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:34:38.183489 sshd[5951]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:38.187771 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:42078.service: Deactivated successfully. Dec 13 01:34:38.190505 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:34:38.191273 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:34:38.192268 systemd-logind[1445]: Removed session 25. Dec 13 01:34:43.205967 systemd[1]: Started sshd@25-10.0.0.83:22-10.0.0.1:42088.service - OpenSSH per-connection server daemon (10.0.0.1:42088). Dec 13 01:34:43.240751 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 42088 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:43.242523 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:43.247100 systemd-logind[1445]: New session 26 of user core. Dec 13 01:34:43.255066 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:34:43.373594 sshd[5966]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:43.379298 systemd[1]: sshd@25-10.0.0.83:22-10.0.0.1:42088.service: Deactivated successfully. Dec 13 01:34:43.381506 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:34:43.382541 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:34:43.383949 systemd-logind[1445]: Removed session 26. Dec 13 01:34:44.388184 kubelet[2575]: E1213 01:34:44.388139 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:48.387534 systemd[1]: Started sshd@26-10.0.0.83:22-10.0.0.1:43196.service - OpenSSH per-connection server daemon (10.0.0.1:43196). 
Dec 13 01:34:48.427808 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 43196 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:48.429691 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:48.433843 systemd-logind[1445]: New session 27 of user core. Dec 13 01:34:48.442096 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:34:48.556293 sshd[5980]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:48.561586 systemd[1]: sshd@26-10.0.0.83:22-10.0.0.1:43196.service: Deactivated successfully. Dec 13 01:34:48.564630 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:34:48.565432 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:34:48.566441 systemd-logind[1445]: Removed session 27.
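From here to the end of the capture the log is steady-state SSH churn: each connection yields a matched pam_unix "session opened"/"session closed" pair keyed by the sshd PID, bracketed by systemd starting and deactivating a per-connection sshd@ service instance. As a throwaway illustration (not part of the system being logged), one might pair those events when reading such a log like this:

    // Sketch: pair "session opened"/"session closed" pam_unix events from
    // journal lines of the shape seen above, keyed by the bracketed sshd PID.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	opened = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened`)
    	closed = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
    )

    func main() {
    	// Two lines copied from the log above, as sample input.
    	lines := []string{
    		`Dec 13 01:34:48.429691 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)`,
    		`Dec 13 01:34:48.556293 sshd[5980]: pam_unix(sshd:session): session closed for user core`,
    	}
    	open := map[string]string{} // pid -> timestamp of the opening line
    	for _, l := range lines {
    		if m := opened.FindStringSubmatch(l); m != nil {
    			open[m[1]] = l[:15] // e.g. "Dec 13 01:34:48"
    		} else if m := closed.FindStringSubmatch(l); m != nil {
    			fmt.Printf("pid %s: opened %s, closed %s\n", m[1], open[m[1]], l[:15])
    			delete(open, m[1])
    		}
    	}
    }

Run against the full journal, every session in this section pairs cleanly, consistent with the orderly "Deactivated successfully" / "Removed session N" shutdowns systemd records for sessions 16 through 27.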