Dec 13 01:26:22.907451 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:26:22.907471 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:26:22.907482 kernel: BIOS-provided physical RAM map: Dec 13 01:26:22.907489 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:26:22.907495 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:26:22.907501 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:26:22.907508 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:26:22.907514 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:26:22.907521 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:26:22.907527 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:26:22.907536 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 01:26:22.907542 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Dec 13 01:26:22.907548 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Dec 13 01:26:22.907554 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Dec 13 01:26:22.907562 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:26:22.907568 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:26:22.907578 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:26:22.907584 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:26:22.907591 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 01:26:22.907597 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:26:22.907604 kernel: NX (Execute Disable) protection: active Dec 13 01:26:22.907610 kernel: APIC: Static calls initialized Dec 13 01:26:22.907617 kernel: efi: EFI v2.7 by EDK II Dec 13 01:26:22.907623 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Dec 13 01:26:22.907630 kernel: SMBIOS 2.8 present. 
Dec 13 01:26:22.907636 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 01:26:22.907643 kernel: Hypervisor detected: KVM Dec 13 01:26:22.907652 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:26:22.907659 kernel: kvm-clock: using sched offset of 4663756561 cycles Dec 13 01:26:22.907675 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:26:22.907683 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:26:22.907690 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:26:22.907697 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:26:22.907704 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 01:26:22.907711 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 01:26:22.907718 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:26:22.907727 kernel: Using GB pages for direct mapping Dec 13 01:26:22.907734 kernel: Secure boot disabled Dec 13 01:26:22.907741 kernel: ACPI: Early table checksum verification disabled Dec 13 01:26:22.907759 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 01:26:22.907770 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:26:22.907777 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:22.907784 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:22.907794 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 01:26:22.907801 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:22.907808 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:22.907816 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:22.907823 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:26:22.907830 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 01:26:22.907837 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 01:26:22.907850 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Dec 13 01:26:22.907857 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 01:26:22.907864 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 01:26:22.907871 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 01:26:22.907878 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 01:26:22.907885 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 01:26:22.907892 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 01:26:22.907900 kernel: No NUMA configuration found Dec 13 01:26:22.907907 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 01:26:22.907916 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 01:26:22.907924 kernel: Zone ranges: Dec 13 01:26:22.907931 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:26:22.907938 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 01:26:22.907945 kernel: Normal empty Dec 13 01:26:22.907952 kernel: Movable zone start for each node Dec 13 01:26:22.907959 kernel: Early memory node ranges Dec 13 01:26:22.907966 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:26:22.907973 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 01:26:22.907981 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 01:26:22.907990 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 01:26:22.907997 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 01:26:22.908004 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 01:26:22.908011 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 01:26:22.908018 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:26:22.908026 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:26:22.908033 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 01:26:22.908040 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:26:22.908047 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 01:26:22.908057 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 01:26:22.908064 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 01:26:22.908072 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:26:22.908079 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:26:22.908086 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:26:22.908093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:26:22.908101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:26:22.908108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:26:22.908115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:26:22.908125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:26:22.908132 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:26:22.908139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:26:22.908147 kernel: TSC deadline timer available Dec 13 01:26:22.908154 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:26:22.908161 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:26:22.908169 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:26:22.908176 kernel: kvm-guest: setup PV sched yield Dec 13 01:26:22.908183 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:26:22.908193 kernel: Booting paravirtualized kernel on KVM Dec 13 01:26:22.908200 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:26:22.908208 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:26:22.908215 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Dec 13 01:26:22.908222 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Dec 13 01:26:22.908229 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:26:22.908236 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:26:22.908244 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:26:22.908252 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 
01:26:22.908262 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:26:22.908269 kernel: random: crng init done Dec 13 01:26:22.908276 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:26:22.908284 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:26:22.908291 kernel: Fallback order for Node 0: 0 Dec 13 01:26:22.908298 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Dec 13 01:26:22.908305 kernel: Policy zone: DMA32 Dec 13 01:26:22.908313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:26:22.908320 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved) Dec 13 01:26:22.908330 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:26:22.908337 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:26:22.908344 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:26:22.908352 kernel: Dynamic Preempt: voluntary Dec 13 01:26:22.908366 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:26:22.908376 kernel: rcu: RCU event tracing is enabled. Dec 13 01:26:22.908384 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:26:22.908392 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:26:22.908400 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:26:22.908408 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:26:22.908415 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:26:22.908425 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:26:22.908433 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:26:22.908441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:26:22.908448 kernel: Console: colour dummy device 80x25 Dec 13 01:26:22.908456 kernel: printk: console [ttyS0] enabled Dec 13 01:26:22.908465 kernel: ACPI: Core revision 20230628 Dec 13 01:26:22.908474 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:26:22.908481 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:26:22.908489 kernel: x2apic enabled Dec 13 01:26:22.908497 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:26:22.908504 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 01:26:22.908512 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 01:26:22.908520 kernel: kvm-guest: setup PV IPIs Dec 13 01:26:22.908527 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:26:22.908537 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:26:22.908545 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:26:22.908555 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:26:22.908563 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:26:22.908570 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:26:22.908580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:26:22.908588 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:26:22.908597 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:26:22.908606 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:26:22.908616 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:26:22.908623 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:26:22.908631 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:26:22.908639 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:26:22.908647 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:26:22.908655 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:26:22.908669 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:26:22.908677 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:26:22.908687 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:26:22.908695 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:26:22.908702 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:26:22.908710 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:26:22.908718 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:26:22.908725 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:26:22.908733 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:26:22.908741 kernel: landlock: Up and running. Dec 13 01:26:22.908806 kernel: SELinux: Initializing. Dec 13 01:26:22.908814 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:26:22.908825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:26:22.908832 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:26:22.908840 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:26:22.908848 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:26:22.908856 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:26:22.908864 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:26:22.908871 kernel: ... version: 0 Dec 13 01:26:22.908879 kernel: ... bit width: 48 Dec 13 01:26:22.908889 kernel: ... generic registers: 6 Dec 13 01:26:22.908896 kernel: ... value mask: 0000ffffffffffff Dec 13 01:26:22.908904 kernel: ... max period: 00007fffffffffff Dec 13 01:26:22.908912 kernel: ... fixed-purpose events: 0 Dec 13 01:26:22.908919 kernel: ... 
event mask: 000000000000003f Dec 13 01:26:22.908927 kernel: signal: max sigframe size: 1776 Dec 13 01:26:22.908935 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:26:22.908943 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:26:22.908950 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:26:22.908960 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:26:22.908968 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 01:26:22.908976 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:26:22.908983 kernel: smpboot: Max logical packages: 1 Dec 13 01:26:22.908991 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:26:22.908998 kernel: devtmpfs: initialized Dec 13 01:26:22.909006 kernel: x86/mm: Memory block size: 128MB Dec 13 01:26:22.909014 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 01:26:22.909022 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 01:26:22.909029 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 01:26:22.909039 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 01:26:22.909047 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 01:26:22.909055 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:26:22.909062 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:26:22.909070 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:26:22.909077 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:26:22.909085 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:26:22.909093 kernel: audit: type=2000 audit(1734053182.351:1): state=initialized audit_enabled=0 res=1 Dec 13 01:26:22.909105 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:26:22.909113 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:26:22.909120 kernel: cpuidle: using governor menu Dec 13 01:26:22.909128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:26:22.909136 kernel: dca service started, version 1.12.1 Dec 13 01:26:22.909143 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:26:22.909151 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 01:26:22.909159 kernel: PCI: Using configuration type 1 for base access Dec 13 01:26:22.909166 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:26:22.909176 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:26:22.909184 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:26:22.909191 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:26:22.909199 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:26:22.909207 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:26:22.909214 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:26:22.909222 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:26:22.909230 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:26:22.909237 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:26:22.909247 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:26:22.909254 kernel: ACPI: Interpreter enabled Dec 13 01:26:22.909262 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:26:22.909269 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:26:22.909277 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:26:22.909285 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:26:22.909292 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:26:22.909300 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:26:22.909493 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:26:22.909631 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:26:22.909773 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:26:22.909784 kernel: PCI host bridge to bus 0000:00 Dec 13 01:26:22.909920 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:26:22.910033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:26:22.910143 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:26:22.910258 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:26:22.910367 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:26:22.910476 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 01:26:22.910584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:26:22.910766 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:26:22.910905 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:26:22.911032 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 01:26:22.911157 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 01:26:22.911276 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 01:26:22.911397 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 01:26:22.911516 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:26:22.911684 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:26:22.911823 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 01:26:22.911945 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 01:26:22.912070 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 01:26:22.912208 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:26:22.912332 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 
01:26:22.912451 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 01:26:22.912571 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Dec 13 01:26:22.912717 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:26:22.912855 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 01:26:22.912976 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 01:26:22.913096 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 01:26:22.913224 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 01:26:22.913361 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:26:22.913484 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:26:22.913615 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:26:22.913760 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 01:26:22.913884 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 01:26:22.914021 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:26:22.914143 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 01:26:22.914153 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:26:22.914161 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:26:22.914168 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:26:22.914176 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:26:22.914188 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:26:22.914196 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:26:22.914203 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:26:22.914211 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:26:22.914218 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:26:22.914226 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:26:22.914234 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:26:22.914241 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:26:22.914249 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:26:22.914259 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:26:22.914266 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:26:22.914274 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:26:22.914281 kernel: iommu: Default domain type: Translated Dec 13 01:26:22.914289 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:26:22.914297 kernel: efivars: Registered efivars operations Dec 13 01:26:22.914304 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:26:22.914312 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:26:22.914319 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 01:26:22.914330 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 01:26:22.914337 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 01:26:22.914345 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 01:26:22.914466 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:26:22.914587 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:26:22.914720 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 
01:26:22.914730 kernel: vgaarb: loaded Dec 13 01:26:22.914738 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:26:22.914761 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:26:22.914768 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:26:22.914776 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:26:22.914783 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:26:22.914791 kernel: pnp: PnP ACPI init Dec 13 01:26:22.914948 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:26:22.914960 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:26:22.914968 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:26:22.914976 kernel: NET: Registered PF_INET protocol family Dec 13 01:26:22.914987 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:26:22.914995 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:26:22.915003 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:26:22.915010 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:26:22.915018 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:26:22.915026 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:26:22.915034 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:26:22.915041 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:26:22.915051 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:26:22.915059 kernel: NET: Registered PF_XDP protocol family Dec 13 01:26:22.915181 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 01:26:22.915302 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 01:26:22.915419 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:26:22.915530 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:26:22.915640 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:26:22.915774 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:26:22.915891 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:26:22.916000 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 01:26:22.916011 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:26:22.916019 kernel: Initialise system trusted keyrings Dec 13 01:26:22.916026 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:26:22.916034 kernel: Key type asymmetric registered Dec 13 01:26:22.916042 kernel: Asymmetric key parser 'x509' registered Dec 13 01:26:22.916049 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:26:22.916057 kernel: io scheduler mq-deadline registered Dec 13 01:26:22.916068 kernel: io scheduler kyber registered Dec 13 01:26:22.916076 kernel: io scheduler bfq registered Dec 13 01:26:22.916083 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:26:22.916091 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:26:22.916099 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:26:22.916107 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:26:22.916115 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Dec 13 01:26:22.916122 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:26:22.916130 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:26:22.916140 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:26:22.916147 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:26:22.916299 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:26:22.916311 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:26:22.916423 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:26:22.916543 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:26:22 UTC (1734053182) Dec 13 01:26:22.916656 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:26:22.916676 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:26:22.916687 kernel: efifb: probing for efifb Dec 13 01:26:22.916695 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Dec 13 01:26:22.916702 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Dec 13 01:26:22.916710 kernel: efifb: scrolling: redraw Dec 13 01:26:22.916717 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Dec 13 01:26:22.916725 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 01:26:22.916764 kernel: fb0: EFI VGA frame buffer device Dec 13 01:26:22.916774 kernel: pstore: Using crash dump compression: deflate Dec 13 01:26:22.916782 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:26:22.916795 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:26:22.916802 kernel: Segment Routing with IPv6 Dec 13 01:26:22.916810 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:26:22.916818 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:26:22.916826 kernel: Key type dns_resolver registered Dec 13 01:26:22.916833 kernel: IPI shorthand broadcast: enabled Dec 13 01:26:22.916841 kernel: sched_clock: Marking stable (795002369, 121748854)->(941463770, -24712547) Dec 13 01:26:22.916849 kernel: registered taskstats version 1 Dec 13 01:26:22.916857 kernel: Loading compiled-in X.509 certificates Dec 13 01:26:22.916868 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:26:22.916876 kernel: Key type .fscrypt registered Dec 13 01:26:22.916883 kernel: Key type fscrypt-provisioning registered Dec 13 01:26:22.916891 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:26:22.916899 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:26:22.916907 kernel: ima: No architecture policies found Dec 13 01:26:22.916915 kernel: clk: Disabling unused clocks Dec 13 01:26:22.916923 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:26:22.916931 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:26:22.916941 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:26:22.916949 kernel: Run /init as init process Dec 13 01:26:22.916957 kernel: with arguments: Dec 13 01:26:22.916965 kernel: /init Dec 13 01:26:22.916973 kernel: with environment: Dec 13 01:26:22.916981 kernel: HOME=/ Dec 13 01:26:22.916988 kernel: TERM=linux Dec 13 01:26:22.916996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:26:22.917007 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:22.917019 systemd[1]: Detected virtualization kvm. Dec 13 01:26:22.917028 systemd[1]: Detected architecture x86-64. Dec 13 01:26:22.917037 systemd[1]: Running in initrd. Dec 13 01:26:22.917047 systemd[1]: No hostname configured, using default hostname. Dec 13 01:26:22.917058 systemd[1]: Hostname set to . Dec 13 01:26:22.917066 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:26:22.917075 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:26:22.917083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:22.917092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:22.917101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:26:22.917110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:22.917118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:26:22.917130 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:26:22.917140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:26:22.917148 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:26:22.917157 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:22.917165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:22.917174 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:22.917185 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:22.917193 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:22.917202 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:22.917210 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:22.917219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:22.917227 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:26:22.917236 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:26:22.917244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:22.917253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:22.917264 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:22.917272 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:22.917281 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:26:22.917289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:22.917298 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:26:22.917306 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:26:22.917315 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:22.917323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:22.917331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:22.917342 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:22.917351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:22.917359 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:26:22.917368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:26:22.917398 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 01:26:22.917418 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:22.917427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:22.917435 systemd-journald[193]: Journal started Dec 13 01:26:22.917456 systemd-journald[193]: Runtime Journal (/run/log/journal/687ea7842ec54b0397c2eb5100a0cf3b) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:26:22.920780 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:22.921261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:22.925624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:22.929474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:22.929741 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 01:26:22.933285 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:22.941845 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:22.951242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:22.961765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:26:22.963423 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 01:26:22.964354 kernel: Bridge firewalling registered Dec 13 01:26:22.968870 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:26:22.970180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:22.972574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 01:26:22.980053 dracut-cmdline[220]: dracut-dracut-053 Dec 13 01:26:22.982654 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:26:22.989393 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:22.996932 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:23.030482 systemd-resolved[245]: Positive Trust Anchors: Dec 13 01:26:23.030501 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:23.030541 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:23.033044 systemd-resolved[245]: Defaulting to hostname 'linux'. Dec 13 01:26:23.034127 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:23.039493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:23.084778 kernel: SCSI subsystem initialized Dec 13 01:26:23.093767 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:26:23.104777 kernel: iscsi: registered transport (tcp) Dec 13 01:26:23.124830 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:26:23.124853 kernel: QLogic iSCSI HBA Driver Dec 13 01:26:23.171279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:23.177899 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:26:23.204531 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:26:23.204583 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:26:23.204597 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:26:23.245768 kernel: raid6: avx2x4 gen() 27020 MB/s Dec 13 01:26:23.262770 kernel: raid6: avx2x2 gen() 28331 MB/s Dec 13 01:26:23.279852 kernel: raid6: avx2x1 gen() 25190 MB/s Dec 13 01:26:23.279869 kernel: raid6: using algorithm avx2x2 gen() 28331 MB/s Dec 13 01:26:23.297863 kernel: raid6: .... xor() 19966 MB/s, rmw enabled Dec 13 01:26:23.297881 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:26:23.318788 kernel: xor: automatically using best checksumming function avx Dec 13 01:26:23.471787 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:26:23.484862 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:23.491905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:23.510890 systemd-udevd[412]: Using default interface naming scheme 'v255'. Dec 13 01:26:23.516065 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:26:23.528233 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:26:23.540250 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Dec 13 01:26:23.570832 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:23.583931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:23.646971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:23.651890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:26:23.664174 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:23.666584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:23.669377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:23.670728 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:23.678767 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:26:23.699862 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:26:23.700008 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:26:23.700020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:26:23.700031 kernel: GPT:9289727 != 19775487 Dec 13 01:26:23.700047 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:26:23.700057 kernel: GPT:9289727 != 19775487 Dec 13 01:26:23.700067 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:26:23.700077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:23.680956 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:26:23.701960 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:23.704607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:23.704887 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:23.711106 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:23.712523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:23.715301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:23.720627 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:23.724896 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:26:23.724932 kernel: AES CTR mode by8 optimization enabled Dec 13 01:26:23.724943 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Dec 13 01:26:23.725989 kernel: libata version 3.00 loaded. Dec 13 01:26:23.730766 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (458) Dec 13 01:26:23.733077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:26:23.739473 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:26:23.749947 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:26:23.749961 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:26:23.750109 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:26:23.750252 kernel: scsi host0: ahci Dec 13 01:26:23.750424 kernel: scsi host1: ahci Dec 13 01:26:23.750568 kernel: scsi host2: ahci Dec 13 01:26:23.751661 kernel: scsi host3: ahci Dec 13 01:26:23.751850 kernel: scsi host4: ahci Dec 13 01:26:23.751997 kernel: scsi host5: ahci Dec 13 01:26:23.752137 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:26:23.752149 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:26:23.752159 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:26:23.752170 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:26:23.752183 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:26:23.752193 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:26:23.749485 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:26:23.766922 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:26:23.774966 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:26:23.780997 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:26:23.783557 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:26:23.798927 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:26:23.799191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:23.799255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:23.801619 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:23.804287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:23.815976 disk-uuid[556]: Primary Header is updated. Dec 13 01:26:23.815976 disk-uuid[556]: Secondary Entries is updated. Dec 13 01:26:23.815976 disk-uuid[556]: Secondary Header is updated. Dec 13 01:26:23.820246 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:23.820885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:23.826772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:23.828890 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:26:23.855617 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:26:24.064373 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:24.064447 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:26:24.064473 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:24.064484 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:24.064494 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:24.065776 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:26:24.065793 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:26:24.066775 kernel: ata3.00: applying bridge limits Dec 13 01:26:24.067773 kernel: ata3.00: configured for UDMA/100 Dec 13 01:26:24.067830 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:26:24.119781 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:26:24.141804 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:26:24.141818 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:26:24.827734 disk-uuid[558]: The operation has completed successfully. Dec 13 01:26:24.829057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:26:24.848141 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:26:24.848264 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:26:24.877876 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:26:24.880765 sh[596]: Success Dec 13 01:26:24.892813 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:26:24.922208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:26:24.944303 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:26:24.948634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:26:24.957940 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:26:24.957964 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:24.957975 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:26:24.958965 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:26:24.960324 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:26:24.963952 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:26:24.966281 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:26:24.979914 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:26:24.981573 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:26:24.990016 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:24.990040 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:24.990050 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:26:24.992781 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:26:25.001235 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:26:25.002980 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:25.011886 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 01:26:25.018955 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:26:25.069543 ignition[688]: Ignition 2.19.0 Dec 13 01:26:25.069553 ignition[688]: Stage: fetch-offline Dec 13 01:26:25.069589 ignition[688]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:25.069599 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:25.069701 ignition[688]: parsed url from cmdline: "" Dec 13 01:26:25.069705 ignition[688]: no config URL provided Dec 13 01:26:25.069710 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:26:25.069719 ignition[688]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:26:25.069768 ignition[688]: op(1): [started] loading QEMU firmware config module Dec 13 01:26:25.069774 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:26:25.080316 ignition[688]: op(1): [finished] loading QEMU firmware config module Dec 13 01:26:25.100207 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:25.110939 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:25.122932 ignition[688]: parsing config with SHA512: dd3bb8dcee9c589b5fc2fe1f0314dfe3a5270a084a28bfd81ed6b5c628f4fc785bf30a0fb4b8ce2e686355892fd87d192a52f2088208adae7acd1e28cd529911 Dec 13 01:26:25.127164 unknown[688]: fetched base config from "system" Dec 13 01:26:25.127187 unknown[688]: fetched user config from "qemu" Dec 13 01:26:25.130625 ignition[688]: fetch-offline: fetch-offline passed Dec 13 01:26:25.130793 ignition[688]: Ignition finished successfully Dec 13 01:26:25.132642 systemd-networkd[785]: lo: Link UP Dec 13 01:26:25.132651 systemd-networkd[785]: lo: Gained carrier Dec 13 01:26:25.134202 systemd-networkd[785]: Enumeration completed Dec 13 01:26:25.134567 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:25.134570 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:25.136292 systemd-networkd[785]: eth0: Link UP Dec 13 01:26:25.136296 systemd-networkd[785]: eth0: Gained carrier Dec 13 01:26:25.136303 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:25.142729 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:25.145071 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:25.147524 systemd[1]: Reached target network.target - Network. Dec 13 01:26:25.149247 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:26:25.164783 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:26:25.166195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:26:25.178791 ignition[788]: Ignition 2.19.0 Dec 13 01:26:25.178801 ignition[788]: Stage: kargs Dec 13 01:26:25.178952 ignition[788]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:25.178970 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:25.179703 ignition[788]: kargs: kargs passed Dec 13 01:26:25.179758 ignition[788]: Ignition finished successfully Dec 13 01:26:25.184775 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:26:25.192981 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:26:25.204707 ignition[797]: Ignition 2.19.0 Dec 13 01:26:25.204719 ignition[797]: Stage: disks Dec 13 01:26:25.204888 ignition[797]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:25.204899 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:25.207759 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:26:25.205670 ignition[797]: disks: disks passed Dec 13 01:26:25.209308 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:25.205714 ignition[797]: Ignition finished successfully Dec 13 01:26:25.211185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:26:25.213046 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:25.215109 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:25.215528 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:25.226916 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:26:25.238531 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:26:25.244904 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:26:25.263855 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:26:25.347771 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:26:25.348423 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:26:25.350578 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:26:25.372824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:25.374512 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:26:25.375692 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:26:25.375729 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:26:25.386778 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) Dec 13 01:26:25.386802 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:25.386814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:25.386824 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:26:25.375765 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:25.390068 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:26:25.381849 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:26:25.387524 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 01:26:25.391704 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:26:25.412437 systemd-resolved[245]: Detected conflict on linux IN A 10.0.0.34 Dec 13 01:26:25.412453 systemd-resolved[245]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Dec 13 01:26:25.422561 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:26:25.427486 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:26:25.432190 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:26:25.435318 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:26:25.516126 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:25.523876 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:26:25.525472 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:26:25.531770 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:25.547570 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:26:25.552974 ignition[930]: INFO : Ignition 2.19.0 Dec 13 01:26:25.552974 ignition[930]: INFO : Stage: mount Dec 13 01:26:25.554855 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:25.554855 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:25.554855 ignition[930]: INFO : mount: mount passed Dec 13 01:26:25.554855 ignition[930]: INFO : Ignition finished successfully Dec 13 01:26:25.560969 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:26:25.570878 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:26:25.957278 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:26:25.967086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:26:25.974161 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Dec 13 01:26:25.974195 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:26:25.974207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:26:25.975042 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:26:25.977800 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:26:25.979562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:26:26.001921 ignition[958]: INFO : Ignition 2.19.0 Dec 13 01:26:26.001921 ignition[958]: INFO : Stage: files Dec 13 01:26:26.003733 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:26.003733 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:26.006332 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:26:26.007685 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:26:26.007685 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:26:26.011265 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:26:26.012765 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:26:26.014473 unknown[958]: wrote ssh authorized keys file for user: core Dec 13 01:26:26.015651 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:26:26.017891 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:26:26.019812 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:26:26.060690 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:26:26.186815 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:26.189088 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:26:26.538135 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:26:26.815885 systemd-networkd[785]: eth0: Gained IPv6LL Dec 13 01:26:27.124188 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:26:27.124188 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:26:27.127883 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:27.129999 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:26:27.129999 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:26:27.129999 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:26:27.134297 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:26:27.136222 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:26:27.136222 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:26:27.139334 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:26:27.160952 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:26:27.166725 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:26:27.168424 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:26:27.168424 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:27.171303 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:26:27.172794 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:27.174599 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:26:27.176276 ignition[958]: INFO : files: files passed Dec 13 01:26:27.177049 ignition[958]: INFO : Ignition finished successfully Dec 13 01:26:27.180095 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:26:27.193963 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:26:27.196911 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 13 01:26:27.199523 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:26:27.200517 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:26:27.206899 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:26:27.210739 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:27.210739 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:27.215044 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:26:27.213140 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:27.215239 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:26:27.226929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:26:27.252065 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:26:27.252200 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:26:27.252937 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:26:27.253221 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:26:27.253589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:26:27.259146 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:26:27.277334 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:27.288881 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:26:27.299590 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:27.300151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:27.302349 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:26:27.304600 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:26:27.304743 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:26:27.308962 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:26:27.310136 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:26:27.312023 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:26:27.314141 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:26:27.316365 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:26:27.318617 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:26:27.320707 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:27.322990 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:26:27.325229 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:26:27.327274 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:26:27.329108 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:26:27.329271 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:27.331187 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:26:27.332964 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:27.335156 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:26:27.335266 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:27.337357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:26:27.337489 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:27.339686 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:26:27.339811 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:27.341584 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:26:27.343649 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:26:27.343777 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:27.345861 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:26:27.347767 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:26:27.349651 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:26:27.349758 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:27.351764 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:26:27.351884 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:27.353557 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:26:27.353669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:27.355827 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:26:27.355968 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:26:27.371878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:26:27.372935 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:26:27.374493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:26:27.374615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:27.376798 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:26:27.376995 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:27.381927 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:26:27.382194 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:26:27.386151 ignition[1014]: INFO : Ignition 2.19.0 Dec 13 01:26:27.386151 ignition[1014]: INFO : Stage: umount Dec 13 01:26:27.387756 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:27.387756 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:26:27.390688 ignition[1014]: INFO : umount: umount passed Dec 13 01:26:27.391463 ignition[1014]: INFO : Ignition finished successfully Dec 13 01:26:27.393505 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:26:27.393634 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:26:27.395626 systemd[1]: Stopped target network.target - Network. Dec 13 01:26:27.397251 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:26:27.397310 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Dec 13 01:26:27.399052 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:26:27.399099 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:26:27.401109 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:26:27.401154 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:26:27.403119 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:26:27.403164 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:27.405281 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:26:27.407271 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:26:27.409780 systemd-networkd[785]: eth0: DHCPv6 lease lost Dec 13 01:26:27.410114 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:26:27.412273 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:26:27.412393 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:26:27.414570 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:26:27.414610 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:27.428883 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:27.429896 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:26:27.429961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:27.432215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:27.436346 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:26:27.436473 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:27.440966 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:27.441066 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:27.442983 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:27.443031 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:27.444204 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:27.444251 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:27.451134 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:27.451311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:27.453428 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:27.453499 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:27.454670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:27.454713 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:27.456929 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:27.456979 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:27.459177 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:27.459231 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:27.461358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:27.461406 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:26:27.472871 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:27.473976 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:27.474029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:27.476371 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:27.476422 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:27.478641 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:27.478690 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:27.481116 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:27.481164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:27.483710 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:27.483835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:27.486030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:27.486132 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:27.594134 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:26:27.594288 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:26:27.595691 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:27.597095 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:26:27.597155 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:27.611996 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:27.620487 systemd[1]: Switching root. Dec 13 01:26:27.648139 systemd-journald[193]: Journal stopped Dec 13 01:26:28.849826 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Dec 13 01:26:28.849906 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:26:28.849920 kernel: SELinux: policy capability open_perms=1 Dec 13 01:26:28.849931 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:26:28.849943 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:26:28.849954 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:26:28.849965 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:26:28.849976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:26:28.849991 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:26:28.850005 kernel: audit: type=1403 audit(1734053188.089:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:26:28.850017 systemd[1]: Successfully loaded SELinux policy in 45.839ms. Dec 13 01:26:28.850048 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.100ms. Dec 13 01:26:28.850060 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:28.850073 systemd[1]: Detected virtualization kvm. Dec 13 01:26:28.850085 systemd[1]: Detected architecture x86-64. Dec 13 01:26:28.850096 systemd[1]: Detected first boot. 
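The initrd journal stops at 01:26:27.648139, and the next entry in the log carries the timestamp 01:26:28.849826, so roughly 1.2 s of the root switch is not covered by a running journal. A small sketch of that arithmetic with Python's datetime; the syslog-style timestamps carry no year, so only the difference is meaningful.

    from datetime import datetime

    # Timestamps copied from the two journald lines around switch-root above.
    FMT = "%b %d %H:%M:%S.%f"
    stopped = datetime.strptime("Dec 13 01:26:27.648139", FMT)
    resumed = datetime.strptime("Dec 13 01:26:28.849826", FMT)

    gap = (resumed - stopped).total_seconds()
    print(f"journal gap across switch-root: {gap:.3f} s")  # ~1.202 s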
Dec 13 01:26:28.850108 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:26:28.850120 zram_generator::config[1059]: No configuration found. Dec 13 01:26:28.850140 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:26:28.850151 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:26:28.850163 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:26:28.850175 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:28.850187 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:26:28.850200 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:26:28.850211 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:26:28.850223 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:26:28.850237 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:26:28.850249 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:26:28.850261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:26:28.850272 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:26:28.850284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:28.850296 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:28.850309 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:26:28.850320 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:26:28.850332 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:26:28.850347 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:28.850359 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:26:28.850370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:28.850382 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:26:28.850394 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:26:28.850406 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:26:28.850417 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:26:28.850431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:28.850443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:28.850455 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:28.850467 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:28.850479 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:26:28.850491 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:26:28.850509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:28.850522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:28.850534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:26:28.850545 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:26:28.850560 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:26:28.850577 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:26:28.850589 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:26:28.850601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:28.850613 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:26:28.850625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:26:28.850636 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:26:28.850648 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:26:28.850663 systemd[1]: Reached target machines.target - Containers. Dec 13 01:26:28.850674 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:26:28.850686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:28.850698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:28.850710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:26:28.850722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:28.850735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:28.850762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:28.850775 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:26:28.850790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:28.850802 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:26:28.850814 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:26:28.850826 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:26:28.850837 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:26:28.850850 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:26:28.850862 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:28.850873 kernel: fuse: init (API version 7.39) Dec 13 01:26:28.850885 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:28.850899 kernel: loop: module loaded Dec 13 01:26:28.850911 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:26:28.850923 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:26:28.850935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:28.850963 systemd-journald[1129]: Collecting audit messages is disabled. Dec 13 01:26:28.850984 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:26:28.850997 systemd[1]: Stopped verity-setup.service. 
Dec 13 01:26:28.851011 systemd-journald[1129]: Journal started Dec 13 01:26:28.851031 systemd-journald[1129]: Runtime Journal (/run/log/journal/687ea7842ec54b0397c2eb5100a0cf3b) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:26:28.620443 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:26:28.644637 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:26:28.645120 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:26:28.854764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:28.864769 kernel: ACPI: bus type drm_connector registered Dec 13 01:26:28.869779 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:28.870552 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:26:28.871881 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:26:28.873136 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:26:28.874242 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:26:28.875495 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:26:28.876743 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:26:28.878023 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:26:28.879507 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:28.881080 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:26:28.881258 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:26:28.882769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:28.882944 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:28.884413 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:28.884593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:28.885979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:28.886151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:28.887670 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:26:28.887851 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:26:28.889246 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:28.889416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:28.890862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:28.892272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:26:28.893822 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:26:28.907377 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:26:28.916838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:26:28.919298 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:26:28.920471 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 01:26:28.920510 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:28.922577 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:26:28.925041 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:26:28.930011 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:26:28.931338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:28.933907 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:26:28.938064 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:26:28.939371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:28.942886 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:26:28.944057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:28.945116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:28.953693 systemd-journald[1129]: Time spent on flushing to /var/log/journal/687ea7842ec54b0397c2eb5100a0cf3b is 26.993ms for 996 entries. Dec 13 01:26:28.953693 systemd-journald[1129]: System Journal (/var/log/journal/687ea7842ec54b0397c2eb5100a0cf3b) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:26:29.000705 systemd-journald[1129]: Received client request to flush runtime journal. Dec 13 01:26:28.950100 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:26:28.952433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:26:28.956649 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:26:28.958014 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:26:28.959528 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:26:28.982664 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 13 01:26:28.982677 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 13 01:26:28.988729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:28.990406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:28.997971 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:26:29.000970 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:26:29.003462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:26:29.019051 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:26:29.019794 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:26:29.020727 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:26:29.037066 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:26:29.039401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
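systemd-journald reports spending 26.993 ms flushing 996 entries to the persistent journal above. A one-off sketch of the implied per-entry cost and throughput, using only the two numbers from that log line:

    # Flush statistics from the systemd-journald line above.
    flush_ms, entries = 26.993, 996

    per_entry_us = flush_ms * 1000 / entries  # ~27.1 microseconds per entry
    rate_per_s = entries / (flush_ms / 1000)  # ~36,900 entries per second
    print(f"{per_entry_us:.1f} us/entry, {rate_per_s:,.0f} entries/s")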
Dec 13 01:26:29.045861 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:26:29.056342 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:26:29.063873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:26:29.067823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:29.070735 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:26:29.072377 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:26:29.088271 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Dec 13 01:26:29.088702 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Dec 13 01:26:29.092777 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:26:29.094509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:29.162800 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 01:26:29.193788 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:26:29.225777 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:26:29.238797 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 01:26:29.244598 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:26:29.245543 (sd-merge)[1202]: Merged extensions into '/usr'. Dec 13 01:26:29.252605 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:26:29.252630 systemd[1]: Reloading... Dec 13 01:26:29.334780 zram_generator::config[1227]: No configuration found. Dec 13 01:26:29.445080 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:26:29.470007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:29.528308 systemd[1]: Reloading finished in 275 ms. Dec 13 01:26:29.594074 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:26:29.595925 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:26:29.620425 systemd[1]: Starting ensure-sysext.service... Dec 13 01:26:29.623874 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:29.628640 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:26:29.628658 systemd[1]: Reloading... Dec 13 01:26:29.656982 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:26:29.657349 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:26:29.658344 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:26:29.658659 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Dec 13 01:26:29.658740 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Dec 13 01:26:29.664340 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 01:26:29.664351 systemd-tmpfiles[1266]: Skipping /boot Dec 13 01:26:29.680670 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:29.683446 systemd-tmpfiles[1266]: Skipping /boot Dec 13 01:26:29.694783 zram_generator::config[1295]: No configuration found. Dec 13 01:26:29.840646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:29.898137 systemd[1]: Reloading finished in 268 ms. Dec 13 01:26:29.918595 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:26:29.931405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:29.940717 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:29.944048 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:26:29.946940 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:26:29.952816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:29.957303 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:29.961949 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:26:29.966951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:29.967124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:29.975153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:29.978316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:29.986683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:29.988054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:29.990537 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:26:29.991627 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:29.992845 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:26:29.994600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:29.994822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:29.996666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:29.997179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:29.999521 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:29.999740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:30.000508 augenrules[1356]: No rules Dec 13 01:26:30.002174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:30.005064 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Dec 13 01:26:30.016378 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 01:26:30.016617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:30.028528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:30.033057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:30.036643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:30.037808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:30.040452 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:26:30.041539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:30.042716 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:30.046863 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:26:30.048724 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:26:30.051833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:30.052039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:30.054430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:30.054717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:30.057422 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:30.057644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:30.065134 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:26:30.074718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:26:30.076781 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1380) Dec 13 01:26:30.079100 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1380) Dec 13 01:26:30.096516 systemd[1]: Finished ensure-sysext.service. Dec 13 01:26:30.102180 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:26:30.104461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:30.104615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:30.107785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1390) Dec 13 01:26:30.112954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:30.115988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:30.148938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:30.154848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:30.156096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:30.158905 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 13 01:26:30.162953 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:26:30.165266 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:26:30.165301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:26:30.166001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:30.166211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:30.167732 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:30.167948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:30.169502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:30.169864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:30.171502 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:30.171678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:30.177078 systemd-resolved[1335]: Positive Trust Anchors: Dec 13 01:26:30.177090 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:30.177122 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:30.183014 systemd-resolved[1335]: Defaulting to hostname 'linux'. Dec 13 01:26:30.266832 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:26:30.266799 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:30.269786 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:26:30.273296 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:26:30.273491 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:26:30.273701 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:26:30.276714 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:26:30.279018 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:26:30.277568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:26:30.283608 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:30.297009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:26:30.298646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:30.298718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 13 01:26:30.311919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:30.348138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:30.348650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:30.357235 systemd-networkd[1410]: lo: Link UP Dec 13 01:26:30.357247 systemd-networkd[1410]: lo: Gained carrier Dec 13 01:26:30.365142 systemd-networkd[1410]: Enumeration completed Dec 13 01:26:30.375016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:30.376400 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:30.378148 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:26:30.379961 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:30.379966 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:30.380571 systemd[1]: Reached target network.target - Network. Dec 13 01:26:30.382010 systemd-networkd[1410]: eth0: Link UP Dec 13 01:26:30.382015 systemd-networkd[1410]: eth0: Gained carrier Dec 13 01:26:30.382029 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:30.384036 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:26:30.391508 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:26:30.393867 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:26:30.428822 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:26:30.430788 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Dec 13 01:26:31.053244 systemd-resolved[1335]: Clock change detected. Flushing caches. Dec 13 01:26:31.053407 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:26:31.053505 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:26:31.053604 systemd-timesyncd[1411]: Initial clock synchronization to Fri 2024-12-13 01:26:31.053140 UTC. Dec 13 01:26:31.064820 kernel: kvm_amd: TSC scaling supported Dec 13 01:26:31.064854 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:26:31.064867 kernel: kvm_amd: Nested Paging enabled Dec 13 01:26:31.065888 kernel: kvm_amd: LBR virtualization supported Dec 13 01:26:31.065915 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:26:31.066579 kernel: kvm_amd: Virtual GIF supported Dec 13 01:26:31.087331 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:26:31.097776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:31.117225 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:26:31.130543 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:26:31.140559 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:31.200933 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:26:31.202734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
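The DHCPv4 lease logged above (10.0.0.34/16 with gateway 10.0.0.1, the same host systemd-timesyncd later contacts on port 123) places the gateway on the directly attached network. A minimal check of that with Python's ipaddress module, using only the values from the lease line:

    import ipaddress

    # Lease details from the systemd-networkd line above.
    iface = ipaddress.ip_interface("10.0.0.34/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)             # 10.0.0.0/16
    print(gateway in iface.network)  # True: the gateway is on-link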
Dec 13 01:26:31.204074 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:31.205416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:26:31.206878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:26:31.208580 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:26:31.209960 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:26:31.211443 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:26:31.212904 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:26:31.212928 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:31.214009 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:31.216140 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:26:31.219167 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:26:31.231803 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:26:31.234139 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:26:31.235737 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:26:31.236945 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:31.237924 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:31.238904 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:31.238933 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:31.240011 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:26:31.242137 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:26:31.245366 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:31.246421 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:26:31.250569 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:26:31.251775 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:26:31.255461 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:26:31.257517 jq[1447]: false Dec 13 01:26:31.259437 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:26:31.264132 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:26:31.267404 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:26:31.271934 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:26:31.273553 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:26:31.274016 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:26:31.275567 systemd[1]: Starting update-engine.service - Update Engine... 
Dec 13 01:26:31.278832 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:26:31.282789 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:26:31.288023 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:26:31.288261 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:26:31.291102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:26:31.292376 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:26:31.304091 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:26:31.306153 jq[1458]: true Dec 13 01:26:31.306558 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:26:31.306806 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:26:31.318107 extend-filesystems[1448]: Found loop3 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found loop4 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found loop5 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found sr0 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda1 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda2 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda3 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found usr Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda4 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda6 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda7 Dec 13 01:26:31.322879 extend-filesystems[1448]: Found vda9 Dec 13 01:26:31.322879 extend-filesystems[1448]: Checking size of /dev/vda9 Dec 13 01:26:31.322652 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:26:31.344541 update_engine[1456]: I20241213 01:26:31.318821 1456 main.cc:92] Flatcar Update Engine starting Dec 13 01:26:31.344541 update_engine[1456]: I20241213 01:26:31.326342 1456 update_check_scheduler.cc:74] Next update check in 3m42s Dec 13 01:26:31.320211 dbus-daemon[1446]: [system] SELinux support is enabled Dec 13 01:26:31.346159 tar[1464]: linux-amd64/helm Dec 13 01:26:31.335938 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:26:31.346475 jq[1472]: true Dec 13 01:26:31.341215 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:26:31.341242 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:26:31.342886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:26:31.342900 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:26:31.357571 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 13 01:26:31.378285 extend-filesystems[1448]: Resized partition /dev/vda9 Dec 13 01:26:31.381883 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:26:31.384446 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:26:31.384472 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:26:31.387405 systemd-logind[1455]: New seat seat0. Dec 13 01:26:31.389756 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:26:31.433318 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1377) Dec 13 01:26:31.458108 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:26:31.561828 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:26:31.574937 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:26:31.593229 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:26:31.593229 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:26:31.593229 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:26:31.597457 extend-filesystems[1448]: Resized filesystem in /dev/vda9 Dec 13 01:26:31.599036 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:26:31.598737 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:26:31.599434 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:26:31.598976 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:26:31.600819 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:26:31.604387 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:26:31.620647 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:26:31.631678 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:26:31.639555 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:26:31.639797 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:26:31.648537 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:26:31.682592 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:26:31.694720 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:26:31.698433 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:26:31.700133 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:26:31.801671 containerd[1466]: time="2024-12-13T01:26:31.801535359Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:26:31.827611 containerd[1466]: time="2024-12-13T01:26:31.827508337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.830432 containerd[1466]: time="2024-12-13T01:26:31.830386615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:31.830432 containerd[1466]: time="2024-12-13T01:26:31.830414748Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:26:31.830432 containerd[1466]: time="2024-12-13T01:26:31.830431259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:26:31.830680 containerd[1466]: time="2024-12-13T01:26:31.830658755Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:26:31.830702 containerd[1466]: time="2024-12-13T01:26:31.830681308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.830785 containerd[1466]: time="2024-12-13T01:26:31.830762740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:31.830785 containerd[1466]: time="2024-12-13T01:26:31.830779812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831031 containerd[1466]: time="2024-12-13T01:26:31.831006147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831031 containerd[1466]: time="2024-12-13T01:26:31.831023900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831082 containerd[1466]: time="2024-12-13T01:26:31.831037606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831082 containerd[1466]: time="2024-12-13T01:26:31.831047514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831172 containerd[1466]: time="2024-12-13T01:26:31.831150918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831447 containerd[1466]: time="2024-12-13T01:26:31.831424301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831573 containerd[1466]: time="2024-12-13T01:26:31.831550758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:31.831573 containerd[1466]: time="2024-12-13T01:26:31.831567840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:26:31.831713 containerd[1466]: time="2024-12-13T01:26:31.831692674Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:26:31.831781 containerd[1466]: time="2024-12-13T01:26:31.831759740Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:26:31.837937 containerd[1466]: time="2024-12-13T01:26:31.837903812Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:26:31.838028 containerd[1466]: time="2024-12-13T01:26:31.837972271Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:26:31.838028 containerd[1466]: time="2024-12-13T01:26:31.838002397Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:26:31.838028 containerd[1466]: time="2024-12-13T01:26:31.838017526Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:26:31.838098 containerd[1466]: time="2024-12-13T01:26:31.838032985Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:26:31.838217 containerd[1466]: time="2024-12-13T01:26:31.838193997Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:26:31.838505 containerd[1466]: time="2024-12-13T01:26:31.838486024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:26:31.838628 containerd[1466]: time="2024-12-13T01:26:31.838605278Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:26:31.838628 containerd[1466]: time="2024-12-13T01:26:31.838625155Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:26:31.838708 containerd[1466]: time="2024-12-13T01:26:31.838638961Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:26:31.838708 containerd[1466]: time="2024-12-13T01:26:31.838654089Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838708 containerd[1466]: time="2024-12-13T01:26:31.838667174Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838708 containerd[1466]: time="2024-12-13T01:26:31.838679207Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838708 containerd[1466]: time="2024-12-13T01:26:31.838704324Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838813 containerd[1466]: time="2024-12-13T01:26:31.838726656Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838813 containerd[1466]: time="2024-12-13T01:26:31.838740271Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838813 containerd[1466]: time="2024-12-13T01:26:31.838752464Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:26:31.838813 containerd[1466]: time="2024-12-13T01:26:31.838763775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 01:26:31.838813 containerd[1466]: time="2024-12-13T01:26:31.838792669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838813 containerd[1466]: time="2024-12-13T01:26:31.838808439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838822105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838845579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838859905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838873291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838885133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838897957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.838920 containerd[1466]: time="2024-12-13T01:26:31.838912965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839059 containerd[1466]: time="2024-12-13T01:26:31.838928153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839059 containerd[1466]: time="2024-12-13T01:26:31.838958811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839059 containerd[1466]: time="2024-12-13T01:26:31.838972567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839059 containerd[1466]: time="2024-12-13T01:26:31.838992624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839059 containerd[1466]: time="2024-12-13T01:26:31.839031678Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:26:31.839059 containerd[1466]: time="2024-12-13T01:26:31.839051084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839062886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839075309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839134921Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839152164Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839162553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839173714Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839183051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839195264Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839205744Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:26:31.839195 containerd[1466]: time="2024-12-13T01:26:31.839215392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:26:31.839618 containerd[1466]: time="2024-12-13T01:26:31.839548807Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:26:31.839618 containerd[1466]: time="2024-12-13T01:26:31.839611886Z" level=info msg="Connect containerd service" Dec 13 01:26:31.839856 containerd[1466]: time="2024-12-13T01:26:31.839648354Z" level=info msg="using legacy CRI server" Dec 13 01:26:31.839856 containerd[1466]: time="2024-12-13T01:26:31.839656028Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:26:31.839856 containerd[1466]: time="2024-12-13T01:26:31.839778358Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:26:31.840384 containerd[1466]: time="2024-12-13T01:26:31.840359959Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:26:31.840872 containerd[1466]: time="2024-12-13T01:26:31.840578659Z" level=info msg="Start subscribing containerd event" Dec 13 01:26:31.840904 containerd[1466]: time="2024-12-13T01:26:31.840883510Z" level=info msg="Start recovering state" Dec 13 01:26:31.841637 containerd[1466]: time="2024-12-13T01:26:31.841048820Z" level=info msg="Start event monitor" Dec 13 01:26:31.841637 containerd[1466]: time="2024-12-13T01:26:31.841094887Z" level=info msg="Start snapshots syncer" Dec 13 01:26:31.841637 containerd[1466]: time="2024-12-13T01:26:31.841108382Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:26:31.841637 containerd[1466]: time="2024-12-13T01:26:31.841118802Z" level=info msg="Start streaming server" Dec 13 01:26:31.841809 containerd[1466]: time="2024-12-13T01:26:31.841772277Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:26:31.841883 containerd[1466]: time="2024-12-13T01:26:31.841860893Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:26:31.841970 containerd[1466]: time="2024-12-13T01:26:31.841946243Z" level=info msg="containerd successfully booted in 0.041642s" Dec 13 01:26:31.842582 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:26:31.921084 tar[1464]: linux-amd64/LICENSE Dec 13 01:26:31.921207 tar[1464]: linux-amd64/README.md Dec 13 01:26:31.940191 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:26:32.809559 systemd-networkd[1410]: eth0: Gained IPv6LL Dec 13 01:26:32.813419 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:26:32.815257 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:26:32.826520 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:26:32.829125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:32.831427 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:26:32.852762 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:26:32.853031 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:26:32.854731 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 13 01:26:32.858240 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:26:33.928168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:33.930100 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:26:33.931670 systemd[1]: Startup finished in 926ms (kernel) + 5.378s (initrd) + 5.269s (userspace) = 11.574s. Dec 13 01:26:33.934375 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:34.650144 kubelet[1558]: E1213 01:26:34.649973 1558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:34.654816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:34.655017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:34.655445 systemd[1]: kubelet.service: Consumed 1.702s CPU time. Dec 13 01:26:37.522128 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:26:37.532520 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:37652.service - OpenSSH per-connection server daemon (10.0.0.1:37652). Dec 13 01:26:37.574729 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:37.576835 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:37.586040 systemd-logind[1455]: New session 1 of user core. Dec 13 01:26:37.587345 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:37.597564 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:26:37.611302 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:37.613426 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:26:37.622373 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:37.736399 systemd[1576]: Queued start job for default target default.target. Dec 13 01:26:37.746689 systemd[1576]: Created slice app.slice - User Application Slice. Dec 13 01:26:37.746717 systemd[1576]: Reached target paths.target - Paths. Dec 13 01:26:37.746733 systemd[1576]: Reached target timers.target - Timers. Dec 13 01:26:37.748553 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:37.760679 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:37.760919 systemd[1576]: Reached target sockets.target - Sockets. Dec 13 01:26:37.760950 systemd[1576]: Reached target basic.target - Basic System. Dec 13 01:26:37.761024 systemd[1576]: Reached target default.target - Main User Target. Dec 13 01:26:37.761077 systemd[1576]: Startup finished in 131ms. Dec 13 01:26:37.761197 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:37.762754 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:26:37.829123 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:37666.service - OpenSSH per-connection server daemon (10.0.0.1:37666). 
Dec 13 01:26:37.867468 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 37666 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:37.868978 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:37.873515 systemd-logind[1455]: New session 2 of user core. Dec 13 01:26:37.891468 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:26:37.947266 sshd[1587]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:37.963211 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:37666.service: Deactivated successfully. Dec 13 01:26:37.965076 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:26:37.966760 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:26:37.976601 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:37678.service - OpenSSH per-connection server daemon (10.0.0.1:37678). Dec 13 01:26:37.977513 systemd-logind[1455]: Removed session 2. Dec 13 01:26:38.008391 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 37678 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:38.010008 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:38.013922 systemd-logind[1455]: New session 3 of user core. Dec 13 01:26:38.023401 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:26:38.073193 sshd[1594]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:38.085063 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:37678.service: Deactivated successfully. Dec 13 01:26:38.086853 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:26:38.088491 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:26:38.089712 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:37686.service - OpenSSH per-connection server daemon (10.0.0.1:37686). Dec 13 01:26:38.090599 systemd-logind[1455]: Removed session 3. Dec 13 01:26:38.127406 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 37686 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:38.129150 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:38.133376 systemd-logind[1455]: New session 4 of user core. Dec 13 01:26:38.153461 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:26:38.208339 sshd[1601]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:38.220156 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:37686.service: Deactivated successfully. Dec 13 01:26:38.221897 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:26:38.223471 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:26:38.224719 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:37690.service - OpenSSH per-connection server daemon (10.0.0.1:37690). Dec 13 01:26:38.225427 systemd-logind[1455]: Removed session 4. Dec 13 01:26:38.260939 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 37690 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:38.262645 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:38.266823 systemd-logind[1455]: New session 5 of user core. Dec 13 01:26:38.281421 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 01:26:38.339986 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:26:38.340423 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:38.362907 sudo[1611]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:38.364850 sshd[1608]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:38.382124 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:37690.service: Deactivated successfully. Dec 13 01:26:38.383943 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:26:38.385626 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:26:38.387014 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:37700.service - OpenSSH per-connection server daemon (10.0.0.1:37700). Dec 13 01:26:38.387783 systemd-logind[1455]: Removed session 5. Dec 13 01:26:38.423243 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 37700 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:38.424795 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:38.428706 systemd-logind[1455]: New session 6 of user core. Dec 13 01:26:38.444418 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:26:38.497818 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:26:38.498144 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:38.501898 sudo[1620]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:38.508368 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:26:38.508706 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:38.533522 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:38.535339 auditctl[1623]: No rules Dec 13 01:26:38.536713 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:26:38.536976 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:38.538806 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:38.570617 augenrules[1641]: No rules Dec 13 01:26:38.571739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:38.573052 sudo[1619]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:38.574885 sshd[1616]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:38.586428 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:37700.service: Deactivated successfully. Dec 13 01:26:38.588215 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:26:38.589811 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:26:38.602722 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:37714.service - OpenSSH per-connection server daemon (10.0.0.1:37714). Dec 13 01:26:38.603877 systemd-logind[1455]: Removed session 6. Dec 13 01:26:38.634095 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 37714 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:26:38.635573 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:38.639475 systemd-logind[1455]: New session 7 of user core. Dec 13 01:26:38.649432 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 01:26:38.702177 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:26:38.702545 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:38.993590 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:26:38.993691 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:26:39.272809 dockerd[1671]: time="2024-12-13T01:26:39.272656457Z" level=info msg="Starting up" Dec 13 01:26:39.663647 dockerd[1671]: time="2024-12-13T01:26:39.663542078Z" level=info msg="Loading containers: start." Dec 13 01:26:39.766316 kernel: Initializing XFRM netlink socket Dec 13 01:26:39.838274 systemd-networkd[1410]: docker0: Link UP Dec 13 01:26:39.863928 dockerd[1671]: time="2024-12-13T01:26:39.863877889Z" level=info msg="Loading containers: done." Dec 13 01:26:39.880156 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3029812419-merged.mount: Deactivated successfully. Dec 13 01:26:39.881428 dockerd[1671]: time="2024-12-13T01:26:39.881383849Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:26:39.881520 dockerd[1671]: time="2024-12-13T01:26:39.881499866Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:26:39.881644 dockerd[1671]: time="2024-12-13T01:26:39.881620032Z" level=info msg="Daemon has completed initialization" Dec 13 01:26:39.918569 dockerd[1671]: time="2024-12-13T01:26:39.917594577Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:26:39.917886 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:26:40.613163 containerd[1466]: time="2024-12-13T01:26:40.613107722Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:26:41.293543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202272029.mount: Deactivated successfully. 
Dec 13 01:26:42.368767 containerd[1466]: time="2024-12-13T01:26:42.368679977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:42.369433 containerd[1466]: time="2024-12-13T01:26:42.369355403Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:26:42.370654 containerd[1466]: time="2024-12-13T01:26:42.370614134Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:42.373390 containerd[1466]: time="2024-12-13T01:26:42.373353070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:42.374437 containerd[1466]: time="2024-12-13T01:26:42.374404663Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.761244542s" Dec 13 01:26:42.374485 containerd[1466]: time="2024-12-13T01:26:42.374443175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:26:42.397745 containerd[1466]: time="2024-12-13T01:26:42.397700491Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:26:44.272821 containerd[1466]: time="2024-12-13T01:26:44.272745359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:44.273741 containerd[1466]: time="2024-12-13T01:26:44.273703546Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:26:44.275336 containerd[1466]: time="2024-12-13T01:26:44.275303957Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:44.278287 containerd[1466]: time="2024-12-13T01:26:44.278225697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:44.279175 containerd[1466]: time="2024-12-13T01:26:44.279144159Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.881403994s" Dec 13 01:26:44.279220 containerd[1466]: time="2024-12-13T01:26:44.279174947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 
01:26:44.300739 containerd[1466]: time="2024-12-13T01:26:44.300696638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:26:44.905346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:44.913785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:45.106439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:45.112236 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:45.400422 kubelet[1906]: E1213 01:26:45.399955 1906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:45.408462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:45.408703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:45.640637 containerd[1466]: time="2024-12-13T01:26:45.640565009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:45.641430 containerd[1466]: time="2024-12-13T01:26:45.641360150Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:26:45.642637 containerd[1466]: time="2024-12-13T01:26:45.642586290Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:45.645809 containerd[1466]: time="2024-12-13T01:26:45.645766444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:45.647168 containerd[1466]: time="2024-12-13T01:26:45.647117437Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.346379592s" Dec 13 01:26:45.647168 containerd[1466]: time="2024-12-13T01:26:45.647163193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:26:45.670625 containerd[1466]: time="2024-12-13T01:26:45.670512682Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:26:47.315268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71081708.mount: Deactivated successfully. 
Dec 13 01:26:49.301055 containerd[1466]: time="2024-12-13T01:26:49.300935881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:49.302510 containerd[1466]: time="2024-12-13T01:26:49.302415756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:26:49.305629 containerd[1466]: time="2024-12-13T01:26:49.305538442Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:49.310187 containerd[1466]: time="2024-12-13T01:26:49.310079509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:49.310932 containerd[1466]: time="2024-12-13T01:26:49.310866084Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 3.640291306s" Dec 13 01:26:49.311007 containerd[1466]: time="2024-12-13T01:26:49.310931106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:26:49.348623 containerd[1466]: time="2024-12-13T01:26:49.348550976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:26:49.961578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874193917.mount: Deactivated successfully. 
Dec 13 01:26:52.666973 containerd[1466]: time="2024-12-13T01:26:52.666890668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:52.667725 containerd[1466]: time="2024-12-13T01:26:52.667647026Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:26:52.668984 containerd[1466]: time="2024-12-13T01:26:52.668934380Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:52.672061 containerd[1466]: time="2024-12-13T01:26:52.672018725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:52.673350 containerd[1466]: time="2024-12-13T01:26:52.673284528Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.324679761s" Dec 13 01:26:52.673350 containerd[1466]: time="2024-12-13T01:26:52.673349290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:26:52.711056 containerd[1466]: time="2024-12-13T01:26:52.711014075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:26:53.345052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161626943.mount: Deactivated successfully. 
Dec 13 01:26:53.490359 containerd[1466]: time="2024-12-13T01:26:53.490270900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:53.495170 containerd[1466]: time="2024-12-13T01:26:53.495104364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:26:53.516465 containerd[1466]: time="2024-12-13T01:26:53.516403668Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:53.539025 containerd[1466]: time="2024-12-13T01:26:53.538967573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:53.539945 containerd[1466]: time="2024-12-13T01:26:53.539899230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 828.83999ms" Dec 13 01:26:53.539945 containerd[1466]: time="2024-12-13T01:26:53.539940057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:26:53.562869 containerd[1466]: time="2024-12-13T01:26:53.562822900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:26:54.599746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264059255.mount: Deactivated successfully. Dec 13 01:26:55.658987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:55.667097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:55.942281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:55.949191 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:56.110648 kubelet[2054]: E1213 01:26:56.110564 2054 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:56.116103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:56.116345 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:26:57.316769 containerd[1466]: time="2024-12-13T01:26:57.316684190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:57.319073 containerd[1466]: time="2024-12-13T01:26:57.319017145Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:26:57.320921 containerd[1466]: time="2024-12-13T01:26:57.319749779Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:57.325415 containerd[1466]: time="2024-12-13T01:26:57.325345523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:57.326508 containerd[1466]: time="2024-12-13T01:26:57.326461366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.763594333s" Dec 13 01:26:57.326560 containerd[1466]: time="2024-12-13T01:26:57.326513113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:27:00.862352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:00.885673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:00.966068 systemd[1]: Reloading requested from client PID 2145 ('systemctl') (unit session-7.scope)... Dec 13 01:27:00.966487 systemd[1]: Reloading... Dec 13 01:27:01.115590 zram_generator::config[2184]: No configuration found. Dec 13 01:27:01.458712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:01.570675 systemd[1]: Reloading finished in 602 ms. Dec 13 01:27:01.643893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:01.652058 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:01.661780 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:01.667224 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:01.667597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:01.684652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:01.883955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:01.891701 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:02.216886 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:27:02.216886 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:02.216886 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:02.222433 kubelet[2234]: I1213 01:27:02.222097 2234 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:03.060377 kubelet[2234]: I1213 01:27:03.060315 2234 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:03.060377 kubelet[2234]: I1213 01:27:03.060370 2234 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:03.062041 kubelet[2234]: I1213 01:27:03.060872 2234 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:03.100107 kubelet[2234]: E1213 01:27:03.100051 2234 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.101209 kubelet[2234]: I1213 01:27:03.101167 2234 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:03.127611 kubelet[2234]: I1213 01:27:03.127561 2234 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:27:03.129041 kubelet[2234]: I1213 01:27:03.129004 2234 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:03.129348 kubelet[2234]: I1213 01:27:03.129280 2234 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:03.129508 kubelet[2234]: I1213 01:27:03.129358 2234 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:03.129508 kubelet[2234]: I1213 01:27:03.129368 2234 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:03.129592 kubelet[2234]: I1213 01:27:03.129510 2234 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:03.129895 kubelet[2234]: I1213 01:27:03.129634 2234 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:03.129895 kubelet[2234]: I1213 01:27:03.129656 2234 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:03.129895 kubelet[2234]: I1213 01:27:03.129701 2234 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:03.129895 kubelet[2234]: I1213 01:27:03.129729 2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:03.130282 kubelet[2234]: W1213 01:27:03.130229 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.130282 kubelet[2234]: E1213 01:27:03.130280 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.131017 kubelet[2234]: W1213 01:27:03.130611 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 
01:27:03.131017 kubelet[2234]: E1213 01:27:03.130654 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.131453 kubelet[2234]: I1213 01:27:03.131426 2234 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:03.134510 kubelet[2234]: I1213 01:27:03.134479 2234 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:03.134601 kubelet[2234]: W1213 01:27:03.134585 2234 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:27:03.135386 kubelet[2234]: I1213 01:27:03.135369 2234 server.go:1256] "Started kubelet" Dec 13 01:27:03.136801 kubelet[2234]: I1213 01:27:03.135512 2234 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:03.136801 kubelet[2234]: I1213 01:27:03.136026 2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:03.136801 kubelet[2234]: I1213 01:27:03.136530 2234 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:03.136801 kubelet[2234]: I1213 01:27:03.136544 2234 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:03.146504 kubelet[2234]: I1213 01:27:03.146385 2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:03.148745 kubelet[2234]: E1213 01:27:03.148025 2234 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109838129a2d96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:27:03.13532559 +0000 UTC m=+1.236825170,LastTimestamp:2024-12-13 01:27:03.13532559 +0000 UTC m=+1.236825170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:27:03.148745 kubelet[2234]: I1213 01:27:03.148735 2234 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:03.149393 kubelet[2234]: I1213 01:27:03.149335 2234 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:03.149558 kubelet[2234]: I1213 01:27:03.149465 2234 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:03.153443 kubelet[2234]: W1213 01:27:03.152708 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.153443 kubelet[2234]: E1213 01:27:03.152798 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.153443 kubelet[2234]: E1213 01:27:03.152922 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Dec 13 01:27:03.153696 kubelet[2234]: I1213 01:27:03.153531 2234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:03.155810 kubelet[2234]: I1213 01:27:03.155753 2234 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:03.155810 kubelet[2234]: I1213 01:27:03.155782 2234 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:03.155810 kubelet[2234]: E1213 01:27:03.155799 2234 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:03.178008 kubelet[2234]: I1213 01:27:03.177801 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:03.180507 kubelet[2234]: I1213 01:27:03.180453 2234 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:03.180507 kubelet[2234]: I1213 01:27:03.180479 2234 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:03.180507 kubelet[2234]: I1213 01:27:03.180509 2234 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:03.183449 kubelet[2234]: I1213 01:27:03.182537 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:03.183449 kubelet[2234]: I1213 01:27:03.182616 2234 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:03.183449 kubelet[2234]: I1213 01:27:03.182649 2234 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:03.183449 kubelet[2234]: E1213 01:27:03.182734 2234 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:03.183449 kubelet[2234]: W1213 01:27:03.183421 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.183705 kubelet[2234]: E1213 01:27:03.183478 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:03.252026 kubelet[2234]: I1213 01:27:03.251958 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:03.252876 kubelet[2234]: E1213 01:27:03.252768 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:27:03.283268 kubelet[2234]: E1213 01:27:03.283136 2234 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:03.311823 kubelet[2234]: I1213 01:27:03.308751 2234 policy_none.go:49] 
"None policy: Start" Dec 13 01:27:03.315935 kubelet[2234]: I1213 01:27:03.315885 2234 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:03.315935 kubelet[2234]: I1213 01:27:03.315942 2234 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:03.353837 kubelet[2234]: E1213 01:27:03.353745 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Dec 13 01:27:03.433126 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:27:03.453895 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:27:03.455428 kubelet[2234]: I1213 01:27:03.455251 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:03.456236 kubelet[2234]: E1213 01:27:03.456188 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:27:03.461278 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:27:03.474528 kubelet[2234]: I1213 01:27:03.474377 2234 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:03.475014 kubelet[2234]: I1213 01:27:03.474848 2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:03.477496 kubelet[2234]: E1213 01:27:03.477091 2234 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:27:03.483804 kubelet[2234]: I1213 01:27:03.483725 2234 topology_manager.go:215] "Topology Admit Handler" podUID="0d38e2a127342d5641584882efbb35d2" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:27:03.489387 kubelet[2234]: I1213 01:27:03.489047 2234 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:27:03.491062 kubelet[2234]: I1213 01:27:03.490768 2234 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:27:03.501721 systemd[1]: Created slice kubepods-burstable-pod0d38e2a127342d5641584882efbb35d2.slice - libcontainer container kubepods-burstable-pod0d38e2a127342d5641584882efbb35d2.slice. Dec 13 01:27:03.531595 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. 
Dec 13 01:27:03.552924 kubelet[2234]: I1213 01:27:03.552854 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:03.552924 kubelet[2234]: I1213 01:27:03.552912 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:03.552924 kubelet[2234]: I1213 01:27:03.552933 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:03.553283 kubelet[2234]: I1213 01:27:03.552951 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:03.553283 kubelet[2234]: I1213 01:27:03.552976 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:03.553283 kubelet[2234]: I1213 01:27:03.552999 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:03.553283 kubelet[2234]: I1213 01:27:03.553016 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:03.553283 kubelet[2234]: I1213 01:27:03.553034 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:03.553451 kubelet[2234]: I1213 01:27:03.553057 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:03.555752 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Dec 13 01:27:03.755635 kubelet[2234]: E1213 01:27:03.755447 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Dec 13 01:27:03.831193 kubelet[2234]: E1213 01:27:03.829511 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:03.833722 containerd[1466]: time="2024-12-13T01:27:03.833197048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d38e2a127342d5641584882efbb35d2,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:03.843232 kubelet[2234]: E1213 01:27:03.843195 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:03.843902 containerd[1466]: time="2024-12-13T01:27:03.843837923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:03.860858 kubelet[2234]: I1213 01:27:03.860799 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:03.862116 kubelet[2234]: E1213 01:27:03.862074 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:27:03.863835 kubelet[2234]: E1213 01:27:03.863386 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:03.864206 containerd[1466]: time="2024-12-13T01:27:03.864134717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:04.115524 kubelet[2234]: W1213 01:27:04.113868 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.115524 kubelet[2234]: E1213 01:27:04.113968 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.338757 kubelet[2234]: W1213 01:27:04.338669 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.338757 kubelet[2234]: E1213 01:27:04.338743 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.503405 kubelet[2234]: W1213 01:27:04.503068 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.503405 kubelet[2234]: E1213 01:27:04.503167 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.556268 kubelet[2234]: E1213 01:27:04.556197 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:04.618450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519206529.mount: Deactivated successfully. Dec 13 01:27:04.653170 containerd[1466]: time="2024-12-13T01:27:04.651852172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:04.659142 containerd[1466]: time="2024-12-13T01:27:04.658988993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:27:04.662040 containerd[1466]: time="2024-12-13T01:27:04.660883067Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:04.669001 containerd[1466]: time="2024-12-13T01:27:04.666079062Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:04.669185 kubelet[2234]: I1213 01:27:04.667862 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:04.671326 containerd[1466]: time="2024-12-13T01:27:04.670416039Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:04.671592 kubelet[2234]: E1213 01:27:04.671495 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:27:04.673318 containerd[1466]: time="2024-12-13T01:27:04.673215260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:04.677330 containerd[1466]: time="2024-12-13T01:27:04.676717788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:04.684308 containerd[1466]: time="2024-12-13T01:27:04.684192739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:04.685594 containerd[1466]: time="2024-12-13T01:27:04.685517946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 852.177128ms" Dec 13 01:27:04.687994 containerd[1466]: time="2024-12-13T01:27:04.687645842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 823.398373ms" Dec 13 01:27:04.690443 containerd[1466]: time="2024-12-13T01:27:04.690000034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 846.053606ms" Dec 13 01:27:04.724803 kubelet[2234]: W1213 01:27:04.724696 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:04.724803 kubelet[2234]: E1213 01:27:04.724801 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:05.258860 containerd[1466]: time="2024-12-13T01:27:05.134999384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:05.258860 containerd[1466]: time="2024-12-13T01:27:05.138321291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:05.258860 containerd[1466]: time="2024-12-13T01:27:05.138344977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:05.258860 containerd[1466]: time="2024-12-13T01:27:05.138486509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:05.259516 containerd[1466]: time="2024-12-13T01:27:05.259209058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:05.259597 containerd[1466]: time="2024-12-13T01:27:05.259546869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:05.259647 containerd[1466]: time="2024-12-13T01:27:05.259609671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:05.260000 containerd[1466]: time="2024-12-13T01:27:05.259894499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:05.261697 kubelet[2234]: E1213 01:27:05.261630 2234 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:27:05.274698 containerd[1466]: time="2024-12-13T01:27:05.273641536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:05.274698 containerd[1466]: time="2024-12-13T01:27:05.273731048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:05.274698 containerd[1466]: time="2024-12-13T01:27:05.273783299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:05.282326 containerd[1466]: time="2024-12-13T01:27:05.281284906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:05.297694 systemd[1]: Started cri-containerd-e845036322c12cd7f664d3ccee2c66a3f6c27cb1c2bb6ac79d2ec892180d698b.scope - libcontainer container e845036322c12cd7f664d3ccee2c66a3f6c27cb1c2bb6ac79d2ec892180d698b. Dec 13 01:27:05.336597 systemd[1]: Started cri-containerd-d29fe0b409abefd8e98b39f0c0572079b0507fd1381b3f96ad82f1ed206ec339.scope - libcontainer container d29fe0b409abefd8e98b39f0c0572079b0507fd1381b3f96ad82f1ed206ec339. Dec 13 01:27:05.348868 systemd[1]: Started cri-containerd-072ccd996056fff34595ab09ab9122354b32921e1551f5dcf585a4a95f7479fb.scope - libcontainer container 072ccd996056fff34595ab09ab9122354b32921e1551f5dcf585a4a95f7479fb. 
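The certificate_manager error repeated here is the client-certificate bootstrap announced earlier ("Client rotation is on, will bootstrap in background"): the kubelet keeps trying to POST a CertificateSigningRequest to the endpoint shown and gets the same connection-refused answer until the apiserver comes up. A rough sketch of the kind of object being posted, in Python; the endpoint and signerName are copied from the log, while the metadata, usages list and the CSR PEM placeholder are illustrative assumptions:

import base64
import json

# Placeholder only -- generating a real key pair and CSR is out of scope here.
csr_pem = b"-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"

body = {
    "apiVersion": "certificates.k8s.io/v1",
    "kind": "CertificateSigningRequest",
    "metadata": {"generateName": "csr-"},          # assumption: generated name
    "spec": {
        "request": base64.b64encode(csr_pem).decode(),
        "signerName": "kubernetes.io/kube-apiserver-client-kubelet",  # from the log
        "usages": ["client auth"],                  # assumption: client-auth only
    },
}

# Target from the log; it keeps failing until kube-apiserver-localhost is up.
print("POST https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests")
print(json.dumps(body, indent=2))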
Dec 13 01:27:05.394032 kubelet[2234]: E1213 01:27:05.393926 2234 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109838129a2d96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:27:03.13532559 +0000 UTC m=+1.236825170,LastTimestamp:2024-12-13 01:27:03.13532559 +0000 UTC m=+1.236825170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:27:05.522526 containerd[1466]: time="2024-12-13T01:27:05.521265815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e845036322c12cd7f664d3ccee2c66a3f6c27cb1c2bb6ac79d2ec892180d698b\"" Dec 13 01:27:05.523704 kubelet[2234]: E1213 01:27:05.523669 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:05.530347 containerd[1466]: time="2024-12-13T01:27:05.530024355Z" level=info msg="CreateContainer within sandbox \"e845036322c12cd7f664d3ccee2c66a3f6c27cb1c2bb6ac79d2ec892180d698b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:05.570505 containerd[1466]: time="2024-12-13T01:27:05.570446086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d38e2a127342d5641584882efbb35d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"072ccd996056fff34595ab09ab9122354b32921e1551f5dcf585a4a95f7479fb\"" Dec 13 01:27:05.573324 containerd[1466]: time="2024-12-13T01:27:05.571977044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29fe0b409abefd8e98b39f0c0572079b0507fd1381b3f96ad82f1ed206ec339\"" Dec 13 01:27:05.573444 kubelet[2234]: E1213 01:27:05.573364 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:05.573444 kubelet[2234]: E1213 01:27:05.573379 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:05.578048 containerd[1466]: time="2024-12-13T01:27:05.576890989Z" level=info msg="CreateContainer within sandbox \"d29fe0b409abefd8e98b39f0c0572079b0507fd1381b3f96ad82f1ed206ec339\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:05.578348 containerd[1466]: time="2024-12-13T01:27:05.578063577Z" level=info msg="CreateContainer within sandbox \"072ccd996056fff34595ab09ab9122354b32921e1551f5dcf585a4a95f7479fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:05.643447 containerd[1466]: time="2024-12-13T01:27:05.643375287Z" level=info msg="CreateContainer within sandbox \"e845036322c12cd7f664d3ccee2c66a3f6c27cb1c2bb6ac79d2ec892180d698b\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51c041adbab403fbf208bc61b375e405811544d724120a53503c44c0eabfdc03\"" Dec 13 01:27:05.648197 containerd[1466]: time="2024-12-13T01:27:05.646112829Z" level=info msg="StartContainer for \"51c041adbab403fbf208bc61b375e405811544d724120a53503c44c0eabfdc03\"" Dec 13 01:27:05.653934 containerd[1466]: time="2024-12-13T01:27:05.653563660Z" level=info msg="CreateContainer within sandbox \"d29fe0b409abefd8e98b39f0c0572079b0507fd1381b3f96ad82f1ed206ec339\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf60b031d852173ff6c842657f63bcbcdc1e0a2519577f23b74a1cd3144b7c0d\"" Dec 13 01:27:05.656622 containerd[1466]: time="2024-12-13T01:27:05.654961952Z" level=info msg="StartContainer for \"cf60b031d852173ff6c842657f63bcbcdc1e0a2519577f23b74a1cd3144b7c0d\"" Dec 13 01:27:05.667794 containerd[1466]: time="2024-12-13T01:27:05.667654778Z" level=info msg="CreateContainer within sandbox \"072ccd996056fff34595ab09ab9122354b32921e1551f5dcf585a4a95f7479fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30e66ed4e960fcb2c50e02dc5d31e5bc13ab3474af792a0f08cb286f6c8dc745\"" Dec 13 01:27:05.668936 containerd[1466]: time="2024-12-13T01:27:05.668877022Z" level=info msg="StartContainer for \"30e66ed4e960fcb2c50e02dc5d31e5bc13ab3474af792a0f08cb286f6c8dc745\"" Dec 13 01:27:05.742778 systemd[1]: Started cri-containerd-51c041adbab403fbf208bc61b375e405811544d724120a53503c44c0eabfdc03.scope - libcontainer container 51c041adbab403fbf208bc61b375e405811544d724120a53503c44c0eabfdc03. Dec 13 01:27:05.749096 systemd[1]: Started cri-containerd-30e66ed4e960fcb2c50e02dc5d31e5bc13ab3474af792a0f08cb286f6c8dc745.scope - libcontainer container 30e66ed4e960fcb2c50e02dc5d31e5bc13ab3474af792a0f08cb286f6c8dc745. Dec 13 01:27:05.777594 systemd[1]: Started cri-containerd-cf60b031d852173ff6c842657f63bcbcdc1e0a2519577f23b74a1cd3144b7c0d.scope - libcontainer container cf60b031d852173ff6c842657f63bcbcdc1e0a2519577f23b74a1cd3144b7c0d. 
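The three containers being created and started here (kube-controller-manager, kube-scheduler, kube-apiserver) come from the static pod path registered at startup, /etc/kubernetes/manifests. A small sketch that lists those manifests and the pod names they declare; it assumes PyYAML is available and that the files use a .yaml suffix:

import glob
import yaml  # PyYAML, assumed available

for path in sorted(glob.glob("/etc/kubernetes/manifests/*.yaml")):
    with open(path) as f:
        manifest = yaml.safe_load(f)
    # e.g. kube-apiserver-localhost, kube-controller-manager-localhost, ...
    print(path, "->", manifest.get("metadata", {}).get("name", "<unnamed>"))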
Dec 13 01:27:05.834540 containerd[1466]: time="2024-12-13T01:27:05.833269881Z" level=info msg="StartContainer for \"51c041adbab403fbf208bc61b375e405811544d724120a53503c44c0eabfdc03\" returns successfully" Dec 13 01:27:05.851654 containerd[1466]: time="2024-12-13T01:27:05.851569596Z" level=info msg="StartContainer for \"30e66ed4e960fcb2c50e02dc5d31e5bc13ab3474af792a0f08cb286f6c8dc745\" returns successfully" Dec 13 01:27:05.851784 containerd[1466]: time="2024-12-13T01:27:05.851569876Z" level=info msg="StartContainer for \"cf60b031d852173ff6c842657f63bcbcdc1e0a2519577f23b74a1cd3144b7c0d\" returns successfully" Dec 13 01:27:06.268784 kubelet[2234]: E1213 01:27:06.268249 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:06.272888 kubelet[2234]: I1213 01:27:06.272843 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:06.276076 kubelet[2234]: E1213 01:27:06.275310 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:06.276076 kubelet[2234]: E1213 01:27:06.275933 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:07.284620 kubelet[2234]: E1213 01:27:07.284559 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:07.586148 kubelet[2234]: E1213 01:27:07.585018 2234 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:27:07.669634 kubelet[2234]: I1213 01:27:07.669569 2234 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:27:08.133256 kubelet[2234]: I1213 01:27:08.133173 2234 apiserver.go:52] "Watching apiserver" Dec 13 01:27:08.150201 kubelet[2234]: I1213 01:27:08.150121 2234 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:09.077023 kubelet[2234]: E1213 01:27:09.076984 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:09.286596 kubelet[2234]: E1213 01:27:09.286544 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:09.625179 kubelet[2234]: E1213 01:27:09.625101 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:10.287568 kubelet[2234]: E1213 01:27:10.287524 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:11.119605 systemd[1]: Reloading requested from client PID 2513 ('systemctl') (unit session-7.scope)... Dec 13 01:27:11.119629 systemd[1]: Reloading... Dec 13 01:27:11.206516 zram_generator::config[2555]: No configuration found. 
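The dns.go:153 warning that recurs throughout this boot ("Nameserver limits were exceeded, some nameservers have been omitted") is the kubelet noting that the host's resolv.conf lists more nameservers than the resolver limit of three allows; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied to pods. A minimal sketch of that truncation; the three-entry limit matches the warning, the file path is the conventional one:

MAX_NAMESERVERS = 3  # resolver limit the kubelet warning refers to

def applied_nameservers(resolv_conf="/etc/resolv.conf"):
    servers = []
    with open(resolv_conf) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]  # everything past the limit is omitted

print("applied nameserver line:", " ".join(applied_nameservers()))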
Dec 13 01:27:11.359923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:11.485036 systemd[1]: Reloading finished in 364 ms. Dec 13 01:27:11.538036 kubelet[2234]: I1213 01:27:11.537841 2234 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:11.537889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:11.551521 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:11.551950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:11.552031 systemd[1]: kubelet.service: Consumed 1.842s CPU time, 116.5M memory peak, 0B memory swap peak. Dec 13 01:27:11.564810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:11.740714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:11.747615 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:11.815327 kubelet[2597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:11.815327 kubelet[2597]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:11.815327 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:11.815327 kubelet[2597]: I1213 01:27:11.814760 2597 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:11.821527 kubelet[2597]: I1213 01:27:11.821483 2597 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:11.821527 kubelet[2597]: I1213 01:27:11.821513 2597 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:11.821863 kubelet[2597]: I1213 01:27:11.821841 2597 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:11.823191 kubelet[2597]: I1213 01:27:11.823157 2597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:27:11.824933 kubelet[2597]: I1213 01:27:11.824871 2597 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:11.835572 kubelet[2597]: I1213 01:27:11.835515 2597 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:27:11.848599 kubelet[2597]: I1213 01:27:11.848531 2597 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:11.848822 kubelet[2597]: I1213 01:27:11.848753 2597 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:11.848822 kubelet[2597]: I1213 01:27:11.848780 2597 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:11.848822 kubelet[2597]: I1213 01:27:11.848790 2597 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:11.849005 kubelet[2597]: I1213 01:27:11.848831 2597 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:11.849005 kubelet[2597]: I1213 01:27:11.848964 2597 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:11.849005 kubelet[2597]: I1213 01:27:11.848980 2597 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:11.849005 kubelet[2597]: I1213 01:27:11.849010 2597 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:11.849141 kubelet[2597]: I1213 01:27:11.849027 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:11.850134 kubelet[2597]: I1213 01:27:11.850095 2597 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:11.850415 kubelet[2597]: I1213 01:27:11.850398 2597 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:11.851017 kubelet[2597]: I1213 01:27:11.850980 2597 server.go:1256] "Started kubelet" Dec 13 01:27:11.852547 kubelet[2597]: I1213 01:27:11.852351 2597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:11.861658 kubelet[2597]: I1213 01:27:11.861613 2597 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:11.862496 kubelet[2597]: I1213 01:27:11.862453 2597 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:11.864283 kubelet[2597]: I1213 01:27:11.863895 2597 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 01:27:11.864954 kubelet[2597]: I1213 01:27:11.864800 2597 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:11.867815 kubelet[2597]: I1213 01:27:11.867709 2597 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:11.868757 kubelet[2597]: I1213 01:27:11.868699 2597 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:11.868842 kubelet[2597]: I1213 01:27:11.868824 2597 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:11.870997 kubelet[2597]: I1213 01:27:11.870941 2597 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:11.872574 kubelet[2597]: I1213 01:27:11.872337 2597 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:11.872870 kubelet[2597]: E1213 01:27:11.872838 2597 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:11.875387 kubelet[2597]: I1213 01:27:11.875192 2597 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:11.878159 kubelet[2597]: I1213 01:27:11.878107 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:11.883359 kubelet[2597]: I1213 01:27:11.882567 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:11.883359 kubelet[2597]: I1213 01:27:11.882648 2597 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:11.883359 kubelet[2597]: I1213 01:27:11.882685 2597 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:11.883359 kubelet[2597]: E1213 01:27:11.882796 2597 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:11.929055 kubelet[2597]: I1213 01:27:11.928943 2597 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:11.929055 kubelet[2597]: I1213 01:27:11.928976 2597 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:11.929055 kubelet[2597]: I1213 01:27:11.929012 2597 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:11.929392 kubelet[2597]: I1213 01:27:11.929275 2597 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:27:11.929439 kubelet[2597]: I1213 01:27:11.929425 2597 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:27:11.929439 kubelet[2597]: I1213 01:27:11.929438 2597 policy_none.go:49] "None policy: Start" Dec 13 01:27:11.930423 kubelet[2597]: I1213 01:27:11.930400 2597 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:11.930503 kubelet[2597]: I1213 01:27:11.930428 2597 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:11.930640 kubelet[2597]: I1213 01:27:11.930617 2597 state_mem.go:75] "Updated machine memory state" Dec 13 01:27:11.942893 kubelet[2597]: I1213 01:27:11.942838 2597 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:11.944340 kubelet[2597]: I1213 01:27:11.943267 2597 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:11.984541 kubelet[2597]: 
I1213 01:27:11.983127 2597 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:27:11.987362 kubelet[2597]: I1213 01:27:11.986265 2597 topology_manager.go:215] "Topology Admit Handler" podUID="0d38e2a127342d5641584882efbb35d2" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:27:11.987362 kubelet[2597]: I1213 01:27:11.986388 2597 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:27:11.987362 kubelet[2597]: I1213 01:27:11.987192 2597 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:12.005811 kubelet[2597]: E1213 01:27:12.005202 2597 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:12.008262 kubelet[2597]: E1213 01:27:12.007359 2597 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:12.014446 kubelet[2597]: I1213 01:27:12.014386 2597 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:27:12.014625 kubelet[2597]: I1213 01:27:12.014537 2597 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:27:12.172066 kubelet[2597]: I1213 01:27:12.171954 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:12.172066 kubelet[2597]: I1213 01:27:12.172029 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:12.172066 kubelet[2597]: I1213 01:27:12.172060 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:12.172371 kubelet[2597]: I1213 01:27:12.172091 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:12.172371 kubelet[2597]: I1213 01:27:12.172130 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:12.172371 kubelet[2597]: I1213 01:27:12.172157 2597 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:12.172371 kubelet[2597]: I1213 01:27:12.172184 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:12.172371 kubelet[2597]: I1213 01:27:12.172223 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:12.172511 kubelet[2597]: I1213 01:27:12.172264 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:12.308407 kubelet[2597]: E1213 01:27:12.307688 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:12.308407 kubelet[2597]: E1213 01:27:12.308203 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:12.308648 kubelet[2597]: E1213 01:27:12.308597 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:12.850732 kubelet[2597]: I1213 01:27:12.850438 2597 apiserver.go:52] "Watching apiserver" Dec 13 01:27:12.872650 kubelet[2597]: I1213 01:27:12.872591 2597 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:12.895453 kubelet[2597]: E1213 01:27:12.894909 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:12.895453 kubelet[2597]: E1213 01:27:12.895325 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:12.903474 kubelet[2597]: E1213 01:27:12.902740 2597 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:12.903474 kubelet[2597]: E1213 01:27:12.903171 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:13.021612 kubelet[2597]: I1213 01:27:13.019957 2597 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.019889746 podStartE2EDuration="4.019889746s" podCreationTimestamp="2024-12-13 01:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:13.01968222 +0000 UTC m=+1.265007305" watchObservedRunningTime="2024-12-13 01:27:13.019889746 +0000 UTC m=+1.265214841" Dec 13 01:27:13.116891 kubelet[2597]: I1213 01:27:13.116604 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.116557142 podStartE2EDuration="2.116557142s" podCreationTimestamp="2024-12-13 01:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:13.116529679 +0000 UTC m=+1.361854775" watchObservedRunningTime="2024-12-13 01:27:13.116557142 +0000 UTC m=+1.361882237" Dec 13 01:27:13.897006 kubelet[2597]: E1213 01:27:13.896959 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:13.898592 kubelet[2597]: E1213 01:27:13.898563 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:15.234543 kubelet[2597]: E1213 01:27:15.234466 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:15.253873 kubelet[2597]: I1213 01:27:15.253723 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.253628755 podStartE2EDuration="6.253628755s" podCreationTimestamp="2024-12-13 01:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:13.137543803 +0000 UTC m=+1.382868898" watchObservedRunningTime="2024-12-13 01:27:15.253628755 +0000 UTC m=+3.498953850" Dec 13 01:27:15.900348 kubelet[2597]: E1213 01:27:15.900310 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:16.360331 update_engine[1456]: I20241213 01:27:16.359820 1456 update_attempter.cc:509] Updating boot flags... Dec 13 01:27:16.415318 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2679) Dec 13 01:27:16.475337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2678) Dec 13 01:27:16.518403 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2678) Dec 13 01:27:16.924650 sudo[1652]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:16.937816 sshd[1649]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:16.944960 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:37714.service: Deactivated successfully. Dec 13 01:27:16.948662 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:27:16.948964 systemd[1]: session-7.scope: Consumed 6.005s CPU time, 191.9M memory peak, 0B memory swap peak. 
Dec 13 01:27:16.952707 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:27:16.956112 systemd-logind[1455]: Removed session 7. Dec 13 01:27:18.261503 kubelet[2597]: E1213 01:27:18.261463 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:18.905326 kubelet[2597]: E1213 01:27:18.905258 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:20.143569 kubelet[2597]: E1213 01:27:20.143528 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:20.909260 kubelet[2597]: E1213 01:27:20.909200 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:21.910688 kubelet[2597]: E1213 01:27:21.910644 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:25.103681 kubelet[2597]: I1213 01:27:25.103630 2597 topology_manager.go:215] "Topology Admit Handler" podUID="66429835-0e1f-4067-b8fd-06c11e8cf831" podNamespace="kube-system" podName="kube-proxy-6ds9n" Dec 13 01:27:25.112879 systemd[1]: Created slice kubepods-besteffort-pod66429835_0e1f_4067_b8fd_06c11e8cf831.slice - libcontainer container kubepods-besteffort-pod66429835_0e1f_4067_b8fd_06c11e8cf831.slice. Dec 13 01:27:25.128784 kubelet[2597]: I1213 01:27:25.128741 2597 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:27:25.129256 containerd[1466]: time="2024-12-13T01:27:25.129212829Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
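containerd's "No cni config template is specified, wait for other system components to drop the config" means pod networking stays unconfigured until something writes a CNI config under /etc/cni/net.d/; on this node that will be Calico, installed by the tigera-operator pod admitted just below. Purely to illustrate the kind of file containerd is waiting for (not what Calico actually installs), here is a sketch that writes a minimal bridge/host-local conflist for the podCIDR from the log, 192.168.0.0/24; the file name, network name and plugin choice are assumptions:

import json

cni_conflist = {
    "cniVersion": "0.3.1",
    "name": "example-podnet",  # illustrative name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "192.168.0.0/24"},  # podCIDR from the log
        }
    ],
}

with open("/etc/cni/net.d/10-example.conflist", "w") as f:  # assumed path
    json.dump(cni_conflist, f, indent=2)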
Dec 13 01:27:25.129614 kubelet[2597]: I1213 01:27:25.129433 2597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:27:25.152393 kubelet[2597]: I1213 01:27:25.152330 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66429835-0e1f-4067-b8fd-06c11e8cf831-kube-proxy\") pod \"kube-proxy-6ds9n\" (UID: \"66429835-0e1f-4067-b8fd-06c11e8cf831\") " pod="kube-system/kube-proxy-6ds9n" Dec 13 01:27:25.152393 kubelet[2597]: I1213 01:27:25.152381 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66429835-0e1f-4067-b8fd-06c11e8cf831-xtables-lock\") pod \"kube-proxy-6ds9n\" (UID: \"66429835-0e1f-4067-b8fd-06c11e8cf831\") " pod="kube-system/kube-proxy-6ds9n" Dec 13 01:27:25.152393 kubelet[2597]: I1213 01:27:25.152405 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66429835-0e1f-4067-b8fd-06c11e8cf831-lib-modules\") pod \"kube-proxy-6ds9n\" (UID: \"66429835-0e1f-4067-b8fd-06c11e8cf831\") " pod="kube-system/kube-proxy-6ds9n" Dec 13 01:27:25.152611 kubelet[2597]: I1213 01:27:25.152430 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxxq\" (UniqueName: \"kubernetes.io/projected/66429835-0e1f-4067-b8fd-06c11e8cf831-kube-api-access-wgxxq\") pod \"kube-proxy-6ds9n\" (UID: \"66429835-0e1f-4067-b8fd-06c11e8cf831\") " pod="kube-system/kube-proxy-6ds9n" Dec 13 01:27:25.260777 kubelet[2597]: E1213 01:27:25.260739 2597 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:27:25.260777 kubelet[2597]: E1213 01:27:25.260767 2597 projected.go:200] Error preparing data for projected volume kube-api-access-wgxxq for pod kube-system/kube-proxy-6ds9n: configmap "kube-root-ca.crt" not found Dec 13 01:27:25.260975 kubelet[2597]: E1213 01:27:25.260829 2597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66429835-0e1f-4067-b8fd-06c11e8cf831-kube-api-access-wgxxq podName:66429835-0e1f-4067-b8fd-06c11e8cf831 nodeName:}" failed. No retries permitted until 2024-12-13 01:27:25.76080819 +0000 UTC m=+14.006133285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wgxxq" (UniqueName: "kubernetes.io/projected/66429835-0e1f-4067-b8fd-06c11e8cf831-kube-api-access-wgxxq") pod "kube-proxy-6ds9n" (UID: "66429835-0e1f-4067-b8fd-06c11e8cf831") : configmap "kube-root-ca.crt" not found Dec 13 01:27:25.589906 kubelet[2597]: I1213 01:27:25.589849 2597 topology_manager.go:215] "Topology Admit Handler" podUID="eac70eca-003b-41f3-aa56-72ba830d49de" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-4rtg8" Dec 13 01:27:25.599113 systemd[1]: Created slice kubepods-besteffort-podeac70eca_003b_41f3_aa56_72ba830d49de.slice - libcontainer container kubepods-besteffort-podeac70eca_003b_41f3_aa56_72ba830d49de.slice. 
Dec 13 01:27:25.656439 kubelet[2597]: I1213 01:27:25.656396 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eac70eca-003b-41f3-aa56-72ba830d49de-var-lib-calico\") pod \"tigera-operator-c7ccbd65-4rtg8\" (UID: \"eac70eca-003b-41f3-aa56-72ba830d49de\") " pod="tigera-operator/tigera-operator-c7ccbd65-4rtg8" Dec 13 01:27:25.656592 kubelet[2597]: I1213 01:27:25.656455 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5ft\" (UniqueName: \"kubernetes.io/projected/eac70eca-003b-41f3-aa56-72ba830d49de-kube-api-access-cz5ft\") pod \"tigera-operator-c7ccbd65-4rtg8\" (UID: \"eac70eca-003b-41f3-aa56-72ba830d49de\") " pod="tigera-operator/tigera-operator-c7ccbd65-4rtg8" Dec 13 01:27:25.902852 containerd[1466]: time="2024-12-13T01:27:25.902714272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4rtg8,Uid:eac70eca-003b-41f3-aa56-72ba830d49de,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:27:25.931026 containerd[1466]: time="2024-12-13T01:27:25.930916614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:25.931026 containerd[1466]: time="2024-12-13T01:27:25.930986887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:25.931026 containerd[1466]: time="2024-12-13T01:27:25.931000662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:25.931213 containerd[1466]: time="2024-12-13T01:27:25.931091835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:25.949435 systemd[1]: Started cri-containerd-03ebc646cde83e0182b1dda9f2ea06da5d7d706bd3a65d7e1d036733325f7b9f.scope - libcontainer container 03ebc646cde83e0182b1dda9f2ea06da5d7d706bd3a65d7e1d036733325f7b9f. Dec 13 01:27:25.989123 containerd[1466]: time="2024-12-13T01:27:25.989078268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4rtg8,Uid:eac70eca-003b-41f3-aa56-72ba830d49de,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"03ebc646cde83e0182b1dda9f2ea06da5d7d706bd3a65d7e1d036733325f7b9f\"" Dec 13 01:27:25.990809 containerd[1466]: time="2024-12-13T01:27:25.990776256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:27:26.022894 kubelet[2597]: E1213 01:27:26.022863 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:26.024500 containerd[1466]: time="2024-12-13T01:27:26.023405161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ds9n,Uid:66429835-0e1f-4067-b8fd-06c11e8cf831,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:26.049184 containerd[1466]: time="2024-12-13T01:27:26.049057644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:26.049652 containerd[1466]: time="2024-12-13T01:27:26.049538873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:26.050240 containerd[1466]: time="2024-12-13T01:27:26.050185945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:26.050508 containerd[1466]: time="2024-12-13T01:27:26.050437009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:26.068427 systemd[1]: Started cri-containerd-7970b6aa35528f04a28df79b69c66dcbd4bca15ceba12912af045ae2d792ae50.scope - libcontainer container 7970b6aa35528f04a28df79b69c66dcbd4bca15ceba12912af045ae2d792ae50. Dec 13 01:27:26.092518 containerd[1466]: time="2024-12-13T01:27:26.092466378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ds9n,Uid:66429835-0e1f-4067-b8fd-06c11e8cf831,Namespace:kube-system,Attempt:0,} returns sandbox id \"7970b6aa35528f04a28df79b69c66dcbd4bca15ceba12912af045ae2d792ae50\"" Dec 13 01:27:26.093186 kubelet[2597]: E1213 01:27:26.093163 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:26.095917 containerd[1466]: time="2024-12-13T01:27:26.095865056Z" level=info msg="CreateContainer within sandbox \"7970b6aa35528f04a28df79b69c66dcbd4bca15ceba12912af045ae2d792ae50\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:27:26.118318 containerd[1466]: time="2024-12-13T01:27:26.118241452Z" level=info msg="CreateContainer within sandbox \"7970b6aa35528f04a28df79b69c66dcbd4bca15ceba12912af045ae2d792ae50\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a49ef87831f91cdc6f0f10fc37666fa4c0f5d851a1d9fe5077853490c3ef35f\"" Dec 13 01:27:26.119450 containerd[1466]: time="2024-12-13T01:27:26.119416531Z" level=info msg="StartContainer for \"2a49ef87831f91cdc6f0f10fc37666fa4c0f5d851a1d9fe5077853490c3ef35f\"" Dec 13 01:27:26.159449 systemd[1]: Started cri-containerd-2a49ef87831f91cdc6f0f10fc37666fa4c0f5d851a1d9fe5077853490c3ef35f.scope - libcontainer container 2a49ef87831f91cdc6f0f10fc37666fa4c0f5d851a1d9fe5077853490c3ef35f. Dec 13 01:27:26.190943 containerd[1466]: time="2024-12-13T01:27:26.190893759Z" level=info msg="StartContainer for \"2a49ef87831f91cdc6f0f10fc37666fa4c0f5d851a1d9fe5077853490c3ef35f\" returns successfully" Dec 13 01:27:26.920199 kubelet[2597]: E1213 01:27:26.920159 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:26.927793 kubelet[2597]: I1213 01:27:26.927754 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6ds9n" podStartSLOduration=1.9277124369999998 podStartE2EDuration="1.927712437s" podCreationTimestamp="2024-12-13 01:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:26.927420506 +0000 UTC m=+15.172745601" watchObservedRunningTime="2024-12-13 01:27:26.927712437 +0000 UTC m=+15.173037532" Dec 13 01:27:28.183044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381107162.mount: Deactivated successfully. 
Dec 13 01:27:28.601187 containerd[1466]: time="2024-12-13T01:27:28.601116820Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.601857 containerd[1466]: time="2024-12-13T01:27:28.601782696Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764297" Dec 13 01:27:28.602994 containerd[1466]: time="2024-12-13T01:27:28.602956561Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.605100 containerd[1466]: time="2024-12-13T01:27:28.605074607Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.605968 containerd[1466]: time="2024-12-13T01:27:28.605920764Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.615102118s" Dec 13 01:27:28.606017 containerd[1466]: time="2024-12-13T01:27:28.605975247Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:27:28.610035 containerd[1466]: time="2024-12-13T01:27:28.610001394Z" level=info msg="CreateContainer within sandbox \"03ebc646cde83e0182b1dda9f2ea06da5d7d706bd3a65d7e1d036733325f7b9f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:27:28.623152 containerd[1466]: time="2024-12-13T01:27:28.623099488Z" level=info msg="CreateContainer within sandbox \"03ebc646cde83e0182b1dda9f2ea06da5d7d706bd3a65d7e1d036733325f7b9f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84d9021581a8505e72f2457b35fe0ea4061533f0833fb03270d78159fcee19d7\"" Dec 13 01:27:28.623595 containerd[1466]: time="2024-12-13T01:27:28.623567841Z" level=info msg="StartContainer for \"84d9021581a8505e72f2457b35fe0ea4061533f0833fb03270d78159fcee19d7\"" Dec 13 01:27:28.654430 systemd[1]: Started cri-containerd-84d9021581a8505e72f2457b35fe0ea4061533f0833fb03270d78159fcee19d7.scope - libcontainer container 84d9021581a8505e72f2457b35fe0ea4061533f0833fb03270d78159fcee19d7. 
Dec 13 01:27:28.683788 containerd[1466]: time="2024-12-13T01:27:28.683745296Z" level=info msg="StartContainer for \"84d9021581a8505e72f2457b35fe0ea4061533f0833fb03270d78159fcee19d7\" returns successfully" Dec 13 01:27:31.596439 kubelet[2597]: I1213 01:27:31.596384 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-4rtg8" podStartSLOduration=3.978126583 podStartE2EDuration="6.596325145s" podCreationTimestamp="2024-12-13 01:27:25 +0000 UTC" firstStartedPulling="2024-12-13 01:27:25.990336445 +0000 UTC m=+14.235661540" lastFinishedPulling="2024-12-13 01:27:28.608535007 +0000 UTC m=+16.853860102" observedRunningTime="2024-12-13 01:27:28.948853951 +0000 UTC m=+17.194179046" watchObservedRunningTime="2024-12-13 01:27:31.596325145 +0000 UTC m=+19.841650240" Dec 13 01:27:31.597228 kubelet[2597]: I1213 01:27:31.596737 2597 topology_manager.go:215] "Topology Admit Handler" podUID="cc2f5fdb-5130-4ebd-8ec5-4136a8981aab" podNamespace="calico-system" podName="calico-typha-bcdb4899-7vfxg" Dec 13 01:27:31.608837 systemd[1]: Created slice kubepods-besteffort-podcc2f5fdb_5130_4ebd_8ec5_4136a8981aab.slice - libcontainer container kubepods-besteffort-podcc2f5fdb_5130_4ebd_8ec5_4136a8981aab.slice. Dec 13 01:27:31.676790 kubelet[2597]: I1213 01:27:31.676737 2597 topology_manager.go:215] "Topology Admit Handler" podUID="78bc9f6f-9978-40c0-9dc0-da6219d70dd6" podNamespace="calico-system" podName="calico-node-hmrqg" Dec 13 01:27:31.687549 systemd[1]: Created slice kubepods-besteffort-pod78bc9f6f_9978_40c0_9dc0_da6219d70dd6.slice - libcontainer container kubepods-besteffort-pod78bc9f6f_9978_40c0_9dc0_da6219d70dd6.slice. Dec 13 01:27:31.694596 kubelet[2597]: I1213 01:27:31.694551 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-xtables-lock\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.694596 kubelet[2597]: I1213 01:27:31.694594 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-cni-log-dir\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.694791 kubelet[2597]: I1213 01:27:31.694621 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc2f5fdb-5130-4ebd-8ec5-4136a8981aab-tigera-ca-bundle\") pod \"calico-typha-bcdb4899-7vfxg\" (UID: \"cc2f5fdb-5130-4ebd-8ec5-4136a8981aab\") " pod="calico-system/calico-typha-bcdb4899-7vfxg" Dec 13 01:27:31.694791 kubelet[2597]: I1213 01:27:31.694643 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-var-run-calico\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.694791 kubelet[2597]: I1213 01:27:31.694762 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-var-lib-calico\") pod \"calico-node-hmrqg\" (UID: 
\"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.694984 kubelet[2597]: I1213 01:27:31.694809 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-cni-bin-dir\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.694984 kubelet[2597]: I1213 01:27:31.694856 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc2f5fdb-5130-4ebd-8ec5-4136a8981aab-typha-certs\") pod \"calico-typha-bcdb4899-7vfxg\" (UID: \"cc2f5fdb-5130-4ebd-8ec5-4136a8981aab\") " pod="calico-system/calico-typha-bcdb4899-7vfxg" Dec 13 01:27:31.694984 kubelet[2597]: I1213 01:27:31.694880 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-tigera-ca-bundle\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.694984 kubelet[2597]: I1213 01:27:31.694903 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbnx\" (UniqueName: \"kubernetes.io/projected/cc2f5fdb-5130-4ebd-8ec5-4136a8981aab-kube-api-access-4kbnx\") pod \"calico-typha-bcdb4899-7vfxg\" (UID: \"cc2f5fdb-5130-4ebd-8ec5-4136a8981aab\") " pod="calico-system/calico-typha-bcdb4899-7vfxg" Dec 13 01:27:31.694984 kubelet[2597]: I1213 01:27:31.694943 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-node-certs\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.695208 kubelet[2597]: I1213 01:27:31.694976 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5v5\" (UniqueName: \"kubernetes.io/projected/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-kube-api-access-wv5v5\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.695208 kubelet[2597]: I1213 01:27:31.695020 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-flexvol-driver-host\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.695208 kubelet[2597]: I1213 01:27:31.695041 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-cni-net-dir\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.695208 kubelet[2597]: I1213 01:27:31.695077 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-lib-modules\") pod \"calico-node-hmrqg\" (UID: 
\"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.695208 kubelet[2597]: I1213 01:27:31.695099 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78bc9f6f-9978-40c0-9dc0-da6219d70dd6-policysync\") pod \"calico-node-hmrqg\" (UID: \"78bc9f6f-9978-40c0-9dc0-da6219d70dd6\") " pod="calico-system/calico-node-hmrqg" Dec 13 01:27:31.797760 kubelet[2597]: E1213 01:27:31.797703 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.797760 kubelet[2597]: W1213 01:27:31.797734 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.798013 kubelet[2597]: E1213 01:27:31.797782 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.798508 kubelet[2597]: E1213 01:27:31.798268 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.798508 kubelet[2597]: W1213 01:27:31.798314 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.798508 kubelet[2597]: E1213 01:27:31.798357 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.798827 kubelet[2597]: E1213 01:27:31.798811 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.798915 kubelet[2597]: W1213 01:27:31.798900 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.798996 kubelet[2597]: E1213 01:27:31.798982 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.799380 kubelet[2597]: E1213 01:27:31.799365 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.799467 kubelet[2597]: W1213 01:27:31.799452 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.801419 kubelet[2597]: E1213 01:27:31.799584 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.802047 kubelet[2597]: E1213 01:27:31.801932 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.802047 kubelet[2597]: W1213 01:27:31.801953 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.802397 kubelet[2597]: E1213 01:27:31.802356 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.803632 kubelet[2597]: E1213 01:27:31.803158 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.803632 kubelet[2597]: W1213 01:27:31.803173 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.803784 kubelet[2597]: E1213 01:27:31.803765 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.805420 kubelet[2597]: E1213 01:27:31.805000 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.805420 kubelet[2597]: W1213 01:27:31.805017 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.805420 kubelet[2597]: E1213 01:27:31.805038 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.805771 kubelet[2597]: E1213 01:27:31.805754 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.805854 kubelet[2597]: W1213 01:27:31.805838 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.805952 kubelet[2597]: E1213 01:27:31.805939 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.807050 kubelet[2597]: E1213 01:27:31.807005 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.807181 kubelet[2597]: W1213 01:27:31.807131 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.807379 kubelet[2597]: E1213 01:27:31.807230 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.808185 kubelet[2597]: E1213 01:27:31.808155 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.808185 kubelet[2597]: W1213 01:27:31.808170 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.808423 kubelet[2597]: E1213 01:27:31.808330 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.824492 kubelet[2597]: E1213 01:27:31.824440 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.824492 kubelet[2597]: W1213 01:27:31.824479 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.824895 kubelet[2597]: E1213 01:27:31.824827 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.825257 kubelet[2597]: E1213 01:27:31.825138 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.825257 kubelet[2597]: W1213 01:27:31.825167 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.825257 kubelet[2597]: E1213 01:27:31.825205 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.827631 kubelet[2597]: E1213 01:27:31.827595 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.827631 kubelet[2597]: W1213 01:27:31.827624 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.827789 kubelet[2597]: E1213 01:27:31.827657 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.828280 kubelet[2597]: E1213 01:27:31.828237 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.828280 kubelet[2597]: W1213 01:27:31.828259 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.828415 kubelet[2597]: E1213 01:27:31.828384 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.837994 kubelet[2597]: I1213 01:27:31.837745 2597 topology_manager.go:215] "Topology Admit Handler" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" podNamespace="calico-system" podName="csi-node-driver-54ctg" Dec 13 01:27:31.839649 kubelet[2597]: E1213 01:27:31.839624 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:31.891960 kubelet[2597]: E1213 01:27:31.891819 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.891960 kubelet[2597]: W1213 01:27:31.891843 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.891960 kubelet[2597]: E1213 01:27:31.891868 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.892164 kubelet[2597]: E1213 01:27:31.892126 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.892164 kubelet[2597]: W1213 01:27:31.892136 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.892164 kubelet[2597]: E1213 01:27:31.892152 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.894503 kubelet[2597]: E1213 01:27:31.894465 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.894503 kubelet[2597]: W1213 01:27:31.894499 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.894612 kubelet[2597]: E1213 01:27:31.894536 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.894967 kubelet[2597]: E1213 01:27:31.894942 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.894967 kubelet[2597]: W1213 01:27:31.894959 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.895039 kubelet[2597]: E1213 01:27:31.894975 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.895278 kubelet[2597]: E1213 01:27:31.895253 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.895278 kubelet[2597]: W1213 01:27:31.895273 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.895369 kubelet[2597]: E1213 01:27:31.895315 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.895577 kubelet[2597]: E1213 01:27:31.895558 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.895577 kubelet[2597]: W1213 01:27:31.895575 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.895642 kubelet[2597]: E1213 01:27:31.895591 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.895885 kubelet[2597]: E1213 01:27:31.895850 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.895885 kubelet[2597]: W1213 01:27:31.895884 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.895959 kubelet[2597]: E1213 01:27:31.895901 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.896234 kubelet[2597]: E1213 01:27:31.896208 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.896234 kubelet[2597]: W1213 01:27:31.896229 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.896333 kubelet[2597]: E1213 01:27:31.896244 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.896543 kubelet[2597]: E1213 01:27:31.896521 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.896543 kubelet[2597]: W1213 01:27:31.896538 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.896608 kubelet[2597]: E1213 01:27:31.896555 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.896802 kubelet[2597]: E1213 01:27:31.896785 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.896802 kubelet[2597]: W1213 01:27:31.896799 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.896876 kubelet[2597]: E1213 01:27:31.896814 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.897089 kubelet[2597]: E1213 01:27:31.897070 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.897089 kubelet[2597]: W1213 01:27:31.897085 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.897159 kubelet[2597]: E1213 01:27:31.897100 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.897387 kubelet[2597]: E1213 01:27:31.897364 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.897387 kubelet[2597]: W1213 01:27:31.897379 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.897472 kubelet[2597]: E1213 01:27:31.897393 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.897726 kubelet[2597]: E1213 01:27:31.897706 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.897726 kubelet[2597]: W1213 01:27:31.897721 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.897788 kubelet[2597]: E1213 01:27:31.897737 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.897998 kubelet[2597]: E1213 01:27:31.897977 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.897998 kubelet[2597]: W1213 01:27:31.897992 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.898072 kubelet[2597]: E1213 01:27:31.898010 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.898273 kubelet[2597]: E1213 01:27:31.898253 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.898273 kubelet[2597]: W1213 01:27:31.898267 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.898367 kubelet[2597]: E1213 01:27:31.898282 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.898591 kubelet[2597]: E1213 01:27:31.898570 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.898591 kubelet[2597]: W1213 01:27:31.898585 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.898657 kubelet[2597]: E1213 01:27:31.898600 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.898896 kubelet[2597]: E1213 01:27:31.898876 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.898896 kubelet[2597]: W1213 01:27:31.898891 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.898956 kubelet[2597]: E1213 01:27:31.898907 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.899197 kubelet[2597]: E1213 01:27:31.899178 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.899197 kubelet[2597]: W1213 01:27:31.899192 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.899261 kubelet[2597]: E1213 01:27:31.899208 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.899493 kubelet[2597]: E1213 01:27:31.899473 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.899493 kubelet[2597]: W1213 01:27:31.899488 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.899565 kubelet[2597]: E1213 01:27:31.899504 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.899775 kubelet[2597]: E1213 01:27:31.899753 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.899775 kubelet[2597]: W1213 01:27:31.899770 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.899835 kubelet[2597]: E1213 01:27:31.899790 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.900182 kubelet[2597]: E1213 01:27:31.900155 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.900182 kubelet[2597]: W1213 01:27:31.900171 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.900393 kubelet[2597]: E1213 01:27:31.900186 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.900393 kubelet[2597]: I1213 01:27:31.900231 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca25e48b-50ec-452e-a7dc-d26850ad2858-socket-dir\") pod \"csi-node-driver-54ctg\" (UID: \"ca25e48b-50ec-452e-a7dc-d26850ad2858\") " pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:31.900560 kubelet[2597]: E1213 01:27:31.900528 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.900560 kubelet[2597]: W1213 01:27:31.900544 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.900620 kubelet[2597]: E1213 01:27:31.900567 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.900620 kubelet[2597]: I1213 01:27:31.900595 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca25e48b-50ec-452e-a7dc-d26850ad2858-kubelet-dir\") pod \"csi-node-driver-54ctg\" (UID: \"ca25e48b-50ec-452e-a7dc-d26850ad2858\") " pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:31.900846 kubelet[2597]: E1213 01:27:31.900828 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.900846 kubelet[2597]: W1213 01:27:31.900842 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.900916 kubelet[2597]: E1213 01:27:31.900862 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.900916 kubelet[2597]: I1213 01:27:31.900887 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca25e48b-50ec-452e-a7dc-d26850ad2858-registration-dir\") pod \"csi-node-driver-54ctg\" (UID: \"ca25e48b-50ec-452e-a7dc-d26850ad2858\") " pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:31.901158 kubelet[2597]: E1213 01:27:31.901139 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.901158 kubelet[2597]: W1213 01:27:31.901156 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.901242 kubelet[2597]: E1213 01:27:31.901177 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.901242 kubelet[2597]: I1213 01:27:31.901204 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca25e48b-50ec-452e-a7dc-d26850ad2858-varrun\") pod \"csi-node-driver-54ctg\" (UID: \"ca25e48b-50ec-452e-a7dc-d26850ad2858\") " pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:31.901545 kubelet[2597]: E1213 01:27:31.901521 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.901545 kubelet[2597]: W1213 01:27:31.901537 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.901616 kubelet[2597]: E1213 01:27:31.901558 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.901616 kubelet[2597]: I1213 01:27:31.901588 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkk8\" (UniqueName: \"kubernetes.io/projected/ca25e48b-50ec-452e-a7dc-d26850ad2858-kube-api-access-lbkk8\") pod \"csi-node-driver-54ctg\" (UID: \"ca25e48b-50ec-452e-a7dc-d26850ad2858\") " pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:31.902037 kubelet[2597]: E1213 01:27:31.902015 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.902037 kubelet[2597]: W1213 01:27:31.902031 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.902206 kubelet[2597]: E1213 01:27:31.902181 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.902353 kubelet[2597]: E1213 01:27:31.902331 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.902353 kubelet[2597]: W1213 01:27:31.902346 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.902587 kubelet[2597]: E1213 01:27:31.902453 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.902699 kubelet[2597]: E1213 01:27:31.902677 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.902699 kubelet[2597]: W1213 01:27:31.902695 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.902870 kubelet[2597]: E1213 01:27:31.902847 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.902996 kubelet[2597]: E1213 01:27:31.902976 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.902996 kubelet[2597]: W1213 01:27:31.902992 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.903228 kubelet[2597]: E1213 01:27:31.903127 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.903272 kubelet[2597]: E1213 01:27:31.903262 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.903344 kubelet[2597]: W1213 01:27:31.903274 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.903471 kubelet[2597]: E1213 01:27:31.903409 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.903724 kubelet[2597]: E1213 01:27:31.903695 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.903724 kubelet[2597]: W1213 01:27:31.903719 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.904018 kubelet[2597]: E1213 01:27:31.903737 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:31.904354 kubelet[2597]: E1213 01:27:31.904065 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.904354 kubelet[2597]: W1213 01:27:31.904088 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.904802 kubelet[2597]: E1213 01:27:31.904751 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.905625 kubelet[2597]: E1213 01:27:31.905434 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.905625 kubelet[2597]: W1213 01:27:31.905453 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.905625 kubelet[2597]: E1213 01:27:31.905468 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.905796 kubelet[2597]: E1213 01:27:31.905763 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.905796 kubelet[2597]: W1213 01:27:31.905789 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.905862 kubelet[2597]: E1213 01:27:31.905804 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.907439 kubelet[2597]: E1213 01:27:31.906130 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:31.907439 kubelet[2597]: W1213 01:27:31.906148 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:31.907439 kubelet[2597]: E1213 01:27:31.906166 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:31.914454 kubelet[2597]: E1213 01:27:31.914392 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:31.915418 containerd[1466]: time="2024-12-13T01:27:31.915353303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bcdb4899-7vfxg,Uid:cc2f5fdb-5130-4ebd-8ec5-4136a8981aab,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:31.953942 containerd[1466]: time="2024-12-13T01:27:31.953688007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:31.953942 containerd[1466]: time="2024-12-13T01:27:31.953784108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:31.953942 containerd[1466]: time="2024-12-13T01:27:31.953822340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:31.955727 containerd[1466]: time="2024-12-13T01:27:31.955609108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:31.985626 systemd[1]: Started cri-containerd-d51ba2f1199d3c3e888b3b3ce25f7896cc8ea08bfa56ecc487d3fd25adc1233d.scope - libcontainer container d51ba2f1199d3c3e888b3b3ce25f7896cc8ea08bfa56ecc487d3fd25adc1233d. Dec 13 01:27:31.992950 kubelet[2597]: E1213 01:27:31.992894 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:31.995131 containerd[1466]: time="2024-12-13T01:27:31.995077017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hmrqg,Uid:78bc9f6f-9978-40c0-9dc0-da6219d70dd6,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:32.002601 kubelet[2597]: E1213 01:27:32.002548 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.002601 kubelet[2597]: W1213 01:27:32.002585 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.002601 kubelet[2597]: E1213 01:27:32.002614 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.014836 kubelet[2597]: E1213 01:27:32.014508 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.014836 kubelet[2597]: W1213 01:27:32.014551 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.014836 kubelet[2597]: E1213 01:27:32.014593 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.015497 kubelet[2597]: E1213 01:27:32.015277 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.015497 kubelet[2597]: W1213 01:27:32.015310 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.015497 kubelet[2597]: E1213 01:27:32.015355 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:32.017494 kubelet[2597]: E1213 01:27:32.017461 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.017856 kubelet[2597]: W1213 01:27:32.017627 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.017856 kubelet[2597]: E1213 01:27:32.017663 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.018030 kubelet[2597]: E1213 01:27:32.018015 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.018333 kubelet[2597]: W1213 01:27:32.018100 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.018333 kubelet[2597]: E1213 01:27:32.018120 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.018481 kubelet[2597]: E1213 01:27:32.018467 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.018543 kubelet[2597]: W1213 01:27:32.018524 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.018599 kubelet[2597]: E1213 01:27:32.018590 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.028322 kubelet[2597]: E1213 01:27:32.025911 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.029492 kubelet[2597]: W1213 01:27:32.029141 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.029492 kubelet[2597]: E1213 01:27:32.029216 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.031524 kubelet[2597]: E1213 01:27:32.031419 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.031524 kubelet[2597]: W1213 01:27:32.031450 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.031954 kubelet[2597]: E1213 01:27:32.031933 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:32.032685 kubelet[2597]: E1213 01:27:32.032650 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.032740 kubelet[2597]: W1213 01:27:32.032685 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.032740 kubelet[2597]: E1213 01:27:32.032731 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.033244 kubelet[2597]: E1213 01:27:32.033224 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.033244 kubelet[2597]: W1213 01:27:32.033241 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.033343 kubelet[2597]: E1213 01:27:32.033318 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.033635 kubelet[2597]: E1213 01:27:32.033614 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.033635 kubelet[2597]: W1213 01:27:32.033630 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.033736 kubelet[2597]: E1213 01:27:32.033675 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.034186 kubelet[2597]: E1213 01:27:32.034146 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.034186 kubelet[2597]: W1213 01:27:32.034169 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.034263 kubelet[2597]: E1213 01:27:32.034209 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.034463 kubelet[2597]: E1213 01:27:32.034443 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.034463 kubelet[2597]: W1213 01:27:32.034460 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.034564 kubelet[2597]: E1213 01:27:32.034501 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:32.034822 kubelet[2597]: E1213 01:27:32.034800 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.034822 kubelet[2597]: W1213 01:27:32.034818 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.034913 kubelet[2597]: E1213 01:27:32.034860 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.035107 kubelet[2597]: E1213 01:27:32.035075 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.035107 kubelet[2597]: W1213 01:27:32.035091 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.035214 kubelet[2597]: E1213 01:27:32.035187 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.035728 kubelet[2597]: E1213 01:27:32.035704 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.035728 kubelet[2597]: W1213 01:27:32.035717 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.035809 kubelet[2597]: E1213 01:27:32.035794 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.035985 kubelet[2597]: E1213 01:27:32.035955 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.035985 kubelet[2597]: W1213 01:27:32.035968 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.036101 kubelet[2597]: E1213 01:27:32.036084 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.036444 kubelet[2597]: E1213 01:27:32.036424 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.036444 kubelet[2597]: W1213 01:27:32.036437 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.036554 kubelet[2597]: E1213 01:27:32.036536 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:32.038416 kubelet[2597]: E1213 01:27:32.038392 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.038416 kubelet[2597]: W1213 01:27:32.038411 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.038690 kubelet[2597]: E1213 01:27:32.038602 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.039559 kubelet[2597]: E1213 01:27:32.039512 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.039559 kubelet[2597]: W1213 01:27:32.039524 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.040210 kubelet[2597]: E1213 01:27:32.039923 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.040748 kubelet[2597]: E1213 01:27:32.040526 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.040748 kubelet[2597]: W1213 01:27:32.040583 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.041129 kubelet[2597]: E1213 01:27:32.040977 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.041268 kubelet[2597]: E1213 01:27:32.041241 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.041345 kubelet[2597]: W1213 01:27:32.041334 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.041538 kubelet[2597]: E1213 01:27:32.041509 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.042096 kubelet[2597]: E1213 01:27:32.042044 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.042096 kubelet[2597]: W1213 01:27:32.042071 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.042363 kubelet[2597]: E1213 01:27:32.042310 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:32.042864 kubelet[2597]: E1213 01:27:32.042730 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.042864 kubelet[2597]: W1213 01:27:32.042743 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.042994 kubelet[2597]: E1213 01:27:32.042981 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.043730 kubelet[2597]: E1213 01:27:32.043668 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.043730 kubelet[2597]: W1213 01:27:32.043683 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.043730 kubelet[2597]: E1213 01:27:32.043699 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.052981 kubelet[2597]: E1213 01:27:32.052792 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:32.052981 kubelet[2597]: W1213 01:27:32.052823 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:32.052981 kubelet[2597]: E1213 01:27:32.052853 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:32.064491 containerd[1466]: time="2024-12-13T01:27:32.064322484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:32.066274 containerd[1466]: time="2024-12-13T01:27:32.065915606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:32.066274 containerd[1466]: time="2024-12-13T01:27:32.065983063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:32.066425 containerd[1466]: time="2024-12-13T01:27:32.066150109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:32.092587 systemd[1]: Started cri-containerd-9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f.scope - libcontainer container 9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f. 
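The recurring driver-call.go and plugins.go errors in this window are the kubelet probing its FlexVolume plugin directory: it tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not installed yet, so the call produces no output and the JSON unmarshal fails with "unexpected end of JSON input". Under the FlexVolume contract the driver answers init with a JSON status object on stdout. As a rough illustration only (a hypothetical Python stub, not the actual Calico uds binary, which is a compiled program), a driver that would satisfy the probe looks like:

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver stub -- illustrates the JSON contract the
    # kubelet expects from "<driver> init". Empty stdout is exactly what makes
    # driver-call.go report "unexpected end of JSON input" above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # "Success" plus a capabilities object is what the kubelet unmarshals.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Any verb this stub does not implement is reported as not supported.
        print(json.dumps({"status": "Not supported", "message": f"operation {op!r} not implemented"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())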
Dec 13 01:27:32.119899 containerd[1466]: time="2024-12-13T01:27:32.119847566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bcdb4899-7vfxg,Uid:cc2f5fdb-5130-4ebd-8ec5-4136a8981aab,Namespace:calico-system,Attempt:0,} returns sandbox id \"d51ba2f1199d3c3e888b3b3ce25f7896cc8ea08bfa56ecc487d3fd25adc1233d\"" Dec 13 01:27:32.123072 kubelet[2597]: E1213 01:27:32.123020 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:32.125990 containerd[1466]: time="2024-12-13T01:27:32.125784593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:27:32.142324 containerd[1466]: time="2024-12-13T01:27:32.142023600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hmrqg,Uid:78bc9f6f-9978-40c0-9dc0-da6219d70dd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\"" Dec 13 01:27:32.145185 kubelet[2597]: E1213 01:27:32.143785 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:33.883610 kubelet[2597]: E1213 01:27:33.883546 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:33.922215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767036218.mount: Deactivated successfully. 
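The dns.go "Nameserver limits exceeded" entries are the kubelet trimming the node's resolv.conf: only the first few nameserver entries (three in current Kubernetes releases) are propagated into pod DNS config, so with more configured the extras are dropped and the applied line collapses to 1.1.1.1 1.0.0.1 8.8.8.8, as logged. A small sketch of that trimming, as an assumption-laden illustration rather than the kubelet's actual code path:

    # Hypothetical sketch of the trimming the kubelet warning describes: only the
    # first MAX_NAMESERVERS "nameserver" lines from resolv.conf are kept, the rest
    # are dropped with a warning.
    MAX_NAMESERVERS = 3  # kubelet's documented limit; stated as an assumption here

    def applied_nameservers(resolv_conf_text: str, limit: int = MAX_NAMESERVERS) -> list[str]:
        servers = []
        for line in resolv_conf_text.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:limit]

    # Example matching the applied line in the log above:
    conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    print(applied_nameservers(conf))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8'] -- extras omitted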
Dec 13 01:27:34.415715 containerd[1466]: time="2024-12-13T01:27:34.415671155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:34.416604 containerd[1466]: time="2024-12-13T01:27:34.416544329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:27:34.417957 containerd[1466]: time="2024-12-13T01:27:34.417910302Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:34.420039 containerd[1466]: time="2024-12-13T01:27:34.419989307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:34.420567 containerd[1466]: time="2024-12-13T01:27:34.420532500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.294548722s" Dec 13 01:27:34.420567 containerd[1466]: time="2024-12-13T01:27:34.420563388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:27:34.421062 containerd[1466]: time="2024-12-13T01:27:34.421032702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:27:34.436636 containerd[1466]: time="2024-12-13T01:27:34.436586071Z" level=info msg="CreateContainer within sandbox \"d51ba2f1199d3c3e888b3b3ce25f7896cc8ea08bfa56ecc487d3fd25adc1233d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:27:34.449151 containerd[1466]: time="2024-12-13T01:27:34.449097652Z" level=info msg="CreateContainer within sandbox \"d51ba2f1199d3c3e888b3b3ce25f7896cc8ea08bfa56ecc487d3fd25adc1233d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a7c18750d8f582152cf7829721a5e6d3aca4f364560238683f26e1bf5cff8050\"" Dec 13 01:27:34.449776 containerd[1466]: time="2024-12-13T01:27:34.449702742Z" level=info msg="StartContainer for \"a7c18750d8f582152cf7829721a5e6d3aca4f364560238683f26e1bf5cff8050\"" Dec 13 01:27:34.479418 systemd[1]: Started cri-containerd-a7c18750d8f582152cf7829721a5e6d3aca4f364560238683f26e1bf5cff8050.scope - libcontainer container a7c18750d8f582152cf7829721a5e6d3aca4f364560238683f26e1bf5cff8050. 
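The pod_startup_latency_tracker entry logged a few lines further down reports podStartE2EDuration="3.974773437s" and podStartSLOduration=1.679008013 for calico-typha-bcdb4899-7vfxg. Those two figures are consistent with the timestamps carried in the same entry: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO figure matches the E2E duration with the image-pull window (lastFinishedPulling minus firstStartedPulling) excluded. A quick cross-check of that arithmetic:

    # Rough cross-check of the pod_startup_latency_tracker numbers below, using the
    # timestamps reported in that log entry (seconds past 01:27:00 UTC for brevity).
    pod_created        = 31.000000000   # podCreationTimestamp  2024-12-13 01:27:31
    first_started_pull = 32.125068844   # firstStartedPulling
    last_finished_pull = 34.420834268   # lastFinishedPulling
    observed_running   = 34.974773437   # observedRunningTime

    e2e = observed_running - pod_created                     # podStartE2EDuration
    slo = e2e - (last_finished_pull - first_started_pull)    # image-pull time excluded

    print(f"E2E ≈ {e2e:.9f}s")   # ≈ 3.974773437s, as logged
    print(f"SLO ≈ {slo:.9f}s")   # ≈ 1.679008013s, as logged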
Dec 13 01:27:34.520616 containerd[1466]: time="2024-12-13T01:27:34.520561820Z" level=info msg="StartContainer for \"a7c18750d8f582152cf7829721a5e6d3aca4f364560238683f26e1bf5cff8050\" returns successfully" Dec 13 01:27:34.951432 kubelet[2597]: E1213 01:27:34.951390 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:34.974854 kubelet[2597]: I1213 01:27:34.974815 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-bcdb4899-7vfxg" podStartSLOduration=1.679008013 podStartE2EDuration="3.974773437s" podCreationTimestamp="2024-12-13 01:27:31 +0000 UTC" firstStartedPulling="2024-12-13 01:27:32.125068844 +0000 UTC m=+20.370393939" lastFinishedPulling="2024-12-13 01:27:34.420834268 +0000 UTC m=+22.666159363" observedRunningTime="2024-12-13 01:27:34.974099699 +0000 UTC m=+23.219424794" watchObservedRunningTime="2024-12-13 01:27:34.974773437 +0000 UTC m=+23.220098532" Dec 13 01:27:35.020658 kubelet[2597]: E1213 01:27:35.020608 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.020658 kubelet[2597]: W1213 01:27:35.020633 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.020658 kubelet[2597]: E1213 01:27:35.020659 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.020942 kubelet[2597]: E1213 01:27:35.020924 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.020942 kubelet[2597]: W1213 01:27:35.020937 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.021012 kubelet[2597]: E1213 01:27:35.020953 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.021227 kubelet[2597]: E1213 01:27:35.021195 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.021227 kubelet[2597]: W1213 01:27:35.021208 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.021227 kubelet[2597]: E1213 01:27:35.021219 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.021455 kubelet[2597]: E1213 01:27:35.021429 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.021455 kubelet[2597]: W1213 01:27:35.021441 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.021455 kubelet[2597]: E1213 01:27:35.021453 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.021693 kubelet[2597]: E1213 01:27:35.021662 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.021693 kubelet[2597]: W1213 01:27:35.021681 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.021693 kubelet[2597]: E1213 01:27:35.021692 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.021875 kubelet[2597]: E1213 01:27:35.021860 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.021875 kubelet[2597]: W1213 01:27:35.021870 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.021929 kubelet[2597]: E1213 01:27:35.021882 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.022118 kubelet[2597]: E1213 01:27:35.022097 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.022118 kubelet[2597]: W1213 01:27:35.022108 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.022118 kubelet[2597]: E1213 01:27:35.022118 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.022331 kubelet[2597]: E1213 01:27:35.022317 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.022331 kubelet[2597]: W1213 01:27:35.022327 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.022393 kubelet[2597]: E1213 01:27:35.022336 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.022508 kubelet[2597]: E1213 01:27:35.022494 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.022508 kubelet[2597]: W1213 01:27:35.022504 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.022624 kubelet[2597]: E1213 01:27:35.022513 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.022719 kubelet[2597]: E1213 01:27:35.022705 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.022749 kubelet[2597]: W1213 01:27:35.022714 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.022749 kubelet[2597]: E1213 01:27:35.022735 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.022935 kubelet[2597]: E1213 01:27:35.022921 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.022935 kubelet[2597]: W1213 01:27:35.022931 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.022988 kubelet[2597]: E1213 01:27:35.022940 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.023155 kubelet[2597]: E1213 01:27:35.023141 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.023155 kubelet[2597]: W1213 01:27:35.023151 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.023204 kubelet[2597]: E1213 01:27:35.023161 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.023342 kubelet[2597]: E1213 01:27:35.023328 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.023342 kubelet[2597]: W1213 01:27:35.023337 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.023406 kubelet[2597]: E1213 01:27:35.023346 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.023551 kubelet[2597]: E1213 01:27:35.023520 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.023551 kubelet[2597]: W1213 01:27:35.023530 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.023551 kubelet[2597]: E1213 01:27:35.023539 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.023724 kubelet[2597]: E1213 01:27:35.023709 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.023724 kubelet[2597]: W1213 01:27:35.023720 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.023778 kubelet[2597]: E1213 01:27:35.023730 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.038929 kubelet[2597]: E1213 01:27:35.038897 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.038929 kubelet[2597]: W1213 01:27:35.038917 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.038929 kubelet[2597]: E1213 01:27:35.038935 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.039191 kubelet[2597]: E1213 01:27:35.039175 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.039191 kubelet[2597]: W1213 01:27:35.039187 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.039253 kubelet[2597]: E1213 01:27:35.039202 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.039465 kubelet[2597]: E1213 01:27:35.039439 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.039465 kubelet[2597]: W1213 01:27:35.039459 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.039534 kubelet[2597]: E1213 01:27:35.039481 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.039751 kubelet[2597]: E1213 01:27:35.039716 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.039751 kubelet[2597]: W1213 01:27:35.039743 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.039804 kubelet[2597]: E1213 01:27:35.039767 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.040079 kubelet[2597]: E1213 01:27:35.040063 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.040079 kubelet[2597]: W1213 01:27:35.040075 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.040141 kubelet[2597]: E1213 01:27:35.040096 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.040346 kubelet[2597]: E1213 01:27:35.040329 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.040346 kubelet[2597]: W1213 01:27:35.040342 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.040405 kubelet[2597]: E1213 01:27:35.040359 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.040579 kubelet[2597]: E1213 01:27:35.040563 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.040579 kubelet[2597]: W1213 01:27:35.040575 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.040642 kubelet[2597]: E1213 01:27:35.040621 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.040801 kubelet[2597]: E1213 01:27:35.040787 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.040801 kubelet[2597]: W1213 01:27:35.040799 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.040851 kubelet[2597]: E1213 01:27:35.040836 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.041019 kubelet[2597]: E1213 01:27:35.040991 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.041019 kubelet[2597]: W1213 01:27:35.041016 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.041064 kubelet[2597]: E1213 01:27:35.041049 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.041243 kubelet[2597]: E1213 01:27:35.041227 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.041243 kubelet[2597]: W1213 01:27:35.041240 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.041319 kubelet[2597]: E1213 01:27:35.041257 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.041560 kubelet[2597]: E1213 01:27:35.041534 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.041560 kubelet[2597]: W1213 01:27:35.041551 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.041609 kubelet[2597]: E1213 01:27:35.041571 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.041811 kubelet[2597]: E1213 01:27:35.041787 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.041811 kubelet[2597]: W1213 01:27:35.041801 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.041872 kubelet[2597]: E1213 01:27:35.041817 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.042078 kubelet[2597]: E1213 01:27:35.042053 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.042078 kubelet[2597]: W1213 01:27:35.042069 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.042133 kubelet[2597]: E1213 01:27:35.042086 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.042409 kubelet[2597]: E1213 01:27:35.042390 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.042409 kubelet[2597]: W1213 01:27:35.042406 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.042474 kubelet[2597]: E1213 01:27:35.042426 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.042669 kubelet[2597]: E1213 01:27:35.042645 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.042669 kubelet[2597]: W1213 01:27:35.042659 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.042720 kubelet[2597]: E1213 01:27:35.042679 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.042909 kubelet[2597]: E1213 01:27:35.042892 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.042909 kubelet[2597]: W1213 01:27:35.042905 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.042958 kubelet[2597]: E1213 01:27:35.042924 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.043187 kubelet[2597]: E1213 01:27:35.043162 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.043187 kubelet[2597]: W1213 01:27:35.043181 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.043230 kubelet[2597]: E1213 01:27:35.043204 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:35.043471 kubelet[2597]: E1213 01:27:35.043453 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:35.043471 kubelet[2597]: W1213 01:27:35.043469 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:35.043544 kubelet[2597]: E1213 01:27:35.043487 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:35.875643 containerd[1466]: time="2024-12-13T01:27:35.875565016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:35.876876 containerd[1466]: time="2024-12-13T01:27:35.876828184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:27:35.878584 containerd[1466]: time="2024-12-13T01:27:35.878528796Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:35.881994 containerd[1466]: time="2024-12-13T01:27:35.881945338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:35.882581 containerd[1466]: time="2024-12-13T01:27:35.882522735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.461460077s" Dec 13 01:27:35.882581 containerd[1466]: time="2024-12-13T01:27:35.882574142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:27:35.883309 kubelet[2597]: E1213 01:27:35.883132 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:35.891247 containerd[1466]: time="2024-12-13T01:27:35.890455249Z" level=info msg="CreateContainer within sandbox \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:27:35.952129 kubelet[2597]: I1213 01:27:35.952071 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:35.952829 kubelet[2597]: E1213 01:27:35.952808 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:35.957537 containerd[1466]: time="2024-12-13T01:27:35.957480605Z" level=info msg="CreateContainer within sandbox \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96\"" Dec 13 01:27:35.959610 containerd[1466]: time="2024-12-13T01:27:35.959574126Z" level=info msg="StartContainer for \"082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96\"" Dec 13 01:27:35.995533 systemd[1]: Started cri-containerd-082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96.scope - libcontainer container 082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96. 
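The flexvol-driver container started here (082274dd...) is the calico-node init step built from the pod2daemon-flexvol image pulled just above; its job is to install the missing FlexVolume driver on the host, which should make the recurring nodeagent~uds probe failures stop. Conceptually it amounts to copying one executable into the directory the kubelet probes; a hypothetical sketch of that effect (the destination path is taken from the driver-call.go errors above, the source path is an assumption, and the real container does this with its own entrypoint):

    # Hypothetical sketch of what the flexvol-driver init step accomplishes:
    # place an executable FlexVolume driver where the kubelet looks for it.
    import os
    import shutil
    import stat

    SRC = "/usr/local/bin/flexvol"  # assumed location inside the init container image
    DST_DIR = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
    DST = os.path.join(DST_DIR, "uds")

    os.makedirs(DST_DIR, exist_ok=True)
    shutil.copy2(SRC, DST)
    # Ensure the kubelet can execute the driver it just found.
    os.chmod(DST, os.stat(DST).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    print(f"installed FlexVolume driver at {DST}")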
Dec 13 01:27:36.030162 kubelet[2597]: E1213 01:27:36.030033 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.030162 kubelet[2597]: W1213 01:27:36.030060 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.030162 kubelet[2597]: E1213 01:27:36.030088 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.031093 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033035 kubelet[2597]: W1213 01:27:36.031108 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.031126 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.031386 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033035 kubelet[2597]: W1213 01:27:36.031396 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.031410 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.031763 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033035 kubelet[2597]: W1213 01:27:36.031775 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.031791 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033035 kubelet[2597]: E1213 01:27:36.032046 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033479 kubelet[2597]: W1213 01:27:36.032056 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033479 kubelet[2597]: E1213 01:27:36.032069 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:36.033479 kubelet[2597]: E1213 01:27:36.032388 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033479 kubelet[2597]: W1213 01:27:36.032399 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033479 kubelet[2597]: E1213 01:27:36.032414 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033479 kubelet[2597]: E1213 01:27:36.032715 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033479 kubelet[2597]: W1213 01:27:36.032726 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033479 kubelet[2597]: E1213 01:27:36.032771 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033479 kubelet[2597]: E1213 01:27:36.033089 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033479 kubelet[2597]: W1213 01:27:36.033098 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033800 kubelet[2597]: E1213 01:27:36.033112 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033800 kubelet[2597]: E1213 01:27:36.033362 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033800 kubelet[2597]: W1213 01:27:36.033371 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033800 kubelet[2597]: E1213 01:27:36.033430 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.033800 kubelet[2597]: E1213 01:27:36.033661 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.033800 kubelet[2597]: W1213 01:27:36.033670 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.033800 kubelet[2597]: E1213 01:27:36.033683 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:36.034045 kubelet[2597]: E1213 01:27:36.033913 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.034045 kubelet[2597]: W1213 01:27:36.033933 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.034045 kubelet[2597]: E1213 01:27:36.033947 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.034325 kubelet[2597]: E1213 01:27:36.034194 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.034325 kubelet[2597]: W1213 01:27:36.034235 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.034325 kubelet[2597]: E1213 01:27:36.034252 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.034622 kubelet[2597]: E1213 01:27:36.034593 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.034622 kubelet[2597]: W1213 01:27:36.034608 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.034622 kubelet[2597]: E1213 01:27:36.034622 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.034990 kubelet[2597]: E1213 01:27:36.034963 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.034990 kubelet[2597]: W1213 01:27:36.034988 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.035056 kubelet[2597]: E1213 01:27:36.035029 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.035412 kubelet[2597]: E1213 01:27:36.035389 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.035484 kubelet[2597]: W1213 01:27:36.035436 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.035484 kubelet[2597]: E1213 01:27:36.035453 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:36.048250 kubelet[2597]: E1213 01:27:36.048087 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.048250 kubelet[2597]: W1213 01:27:36.048234 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.048513 kubelet[2597]: E1213 01:27:36.048281 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.049272 kubelet[2597]: E1213 01:27:36.049241 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.049272 kubelet[2597]: W1213 01:27:36.049255 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.049408 kubelet[2597]: E1213 01:27:36.049272 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.049958 kubelet[2597]: E1213 01:27:36.049932 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.050141 kubelet[2597]: W1213 01:27:36.050102 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.050533 kubelet[2597]: E1213 01:27:36.050512 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.051471 kubelet[2597]: E1213 01:27:36.050899 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.051471 kubelet[2597]: W1213 01:27:36.050909 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.051471 kubelet[2597]: E1213 01:27:36.050925 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.051471 kubelet[2597]: E1213 01:27:36.051346 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.051471 kubelet[2597]: W1213 01:27:36.051371 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.051471 kubelet[2597]: E1213 01:27:36.051405 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:36.051865 kubelet[2597]: E1213 01:27:36.051670 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.051865 kubelet[2597]: W1213 01:27:36.051682 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.051865 kubelet[2597]: E1213 01:27:36.051702 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052016 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.052889 kubelet[2597]: W1213 01:27:36.052030 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052063 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052351 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.052889 kubelet[2597]: W1213 01:27:36.052362 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052420 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052596 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.052889 kubelet[2597]: W1213 01:27:36.052608 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052674 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.052889 kubelet[2597]: E1213 01:27:36.052855 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.053211 kubelet[2597]: W1213 01:27:36.052865 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.053211 kubelet[2597]: E1213 01:27:36.052888 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:36.053501 kubelet[2597]: E1213 01:27:36.053483 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.053501 kubelet[2597]: W1213 01:27:36.053495 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.053588 kubelet[2597]: E1213 01:27:36.053515 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.054276 kubelet[2597]: E1213 01:27:36.054139 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.054276 kubelet[2597]: W1213 01:27:36.054151 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.054276 kubelet[2597]: E1213 01:27:36.054169 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.054593 kubelet[2597]: E1213 01:27:36.054515 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.054593 kubelet[2597]: W1213 01:27:36.054525 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.054593 kubelet[2597]: E1213 01:27:36.054551 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.054857 kubelet[2597]: E1213 01:27:36.054838 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.054857 kubelet[2597]: W1213 01:27:36.054854 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.054948 kubelet[2597]: E1213 01:27:36.054937 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.055101 kubelet[2597]: E1213 01:27:36.055083 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.055101 kubelet[2597]: W1213 01:27:36.055096 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.055174 kubelet[2597]: E1213 01:27:36.055151 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:36.055646 kubelet[2597]: E1213 01:27:36.055595 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.055646 kubelet[2597]: W1213 01:27:36.055637 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.055741 kubelet[2597]: E1213 01:27:36.055679 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.057316 kubelet[2597]: E1213 01:27:36.056662 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.057316 kubelet[2597]: W1213 01:27:36.056681 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.057316 kubelet[2597]: E1213 01:27:36.056713 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.057316 kubelet[2597]: E1213 01:27:36.057026 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:36.057316 kubelet[2597]: W1213 01:27:36.057037 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:36.057316 kubelet[2597]: E1213 01:27:36.057052 2597 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:36.061715 systemd[1]: cri-containerd-082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96.scope: Deactivated successfully. Dec 13 01:27:36.123908 containerd[1466]: time="2024-12-13T01:27:36.123843644Z" level=info msg="StartContainer for \"082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96\" returns successfully" Dec 13 01:27:36.152150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96-rootfs.mount: Deactivated successfully. 
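[Editor's note] The repeated driver-call failures above come from the kubelet probing the FlexVolume directory nodeagent~uds whose `uds` executable is missing, so each `init` call returns empty output that cannot be parsed as JSON. As a hedged illustration only (this is not the actual nodeagent~uds driver), a FlexVolume driver is expected to answer `init` on stdout with a small JSON document; a minimal Python stub of that handshake might look like:

```python
#!/usr/bin/env python3
# Illustrative FlexVolume driver stub -- NOT the nodeagent~uds driver from the
# log above. It only shows the JSON reply kubelet's driver-call expects; an
# empty reply (e.g. when the executable is absent) is exactly what produces
# "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False tells the kubelet not to issue attach/detach calls.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Every other verb must still answer with valid JSON, even if unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

The kubelet re-probes every `vendor~driver` directory under the plugin path, which is why the same three messages recur until the binary appears.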
Dec 13 01:27:36.413768 containerd[1466]: time="2024-12-13T01:27:36.413588012Z" level=info msg="shim disconnected" id=082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96 namespace=k8s.io Dec 13 01:27:36.413768 containerd[1466]: time="2024-12-13T01:27:36.413665608Z" level=warning msg="cleaning up after shim disconnected" id=082274dd9a7f948a9a053f7bb4b3b6fa2af32c59068e87ef9cc0372fe43f2b96 namespace=k8s.io Dec 13 01:27:36.413768 containerd[1466]: time="2024-12-13T01:27:36.413677310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:36.957992 kubelet[2597]: E1213 01:27:36.957913 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:36.959188 containerd[1466]: time="2024-12-13T01:27:36.959071854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:27:37.883853 kubelet[2597]: E1213 01:27:37.883795 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:39.883427 kubelet[2597]: E1213 01:27:39.883375 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:40.521866 containerd[1466]: time="2024-12-13T01:27:40.521795083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:40.522593 containerd[1466]: time="2024-12-13T01:27:40.522519956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:27:40.523862 containerd[1466]: time="2024-12-13T01:27:40.523820001Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:40.526210 containerd[1466]: time="2024-12-13T01:27:40.526153159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:40.526916 containerd[1466]: time="2024-12-13T01:27:40.526857784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.567738812s" Dec 13 01:27:40.526916 containerd[1466]: time="2024-12-13T01:27:40.526900133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:27:40.529328 containerd[1466]: time="2024-12-13T01:27:40.529259321Z" level=info msg="CreateContainer within sandbox \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:27:40.546556 containerd[1466]: time="2024-12-13T01:27:40.546490686Z" level=info msg="CreateContainer within sandbox \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70\"" Dec 13 01:27:40.547286 containerd[1466]: time="2024-12-13T01:27:40.547248191Z" level=info msg="StartContainer for \"bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70\"" Dec 13 01:27:40.589679 systemd[1]: Started cri-containerd-bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70.scope - libcontainer container bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70. Dec 13 01:27:40.628868 containerd[1466]: time="2024-12-13T01:27:40.628806523Z" level=info msg="StartContainer for \"bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70\" returns successfully" Dec 13 01:27:40.967367 kubelet[2597]: E1213 01:27:40.967322 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:41.884435 kubelet[2597]: E1213 01:27:41.883274 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:41.960866 systemd[1]: cri-containerd-bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70.scope: Deactivated successfully. Dec 13 01:27:41.969190 kubelet[2597]: E1213 01:27:41.969165 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:41.987379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70-rootfs.mount: Deactivated successfully. 
Dec 13 01:27:42.037606 kubelet[2597]: I1213 01:27:42.037567 2597 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:27:42.177285 kubelet[2597]: I1213 01:27:42.177090 2597 topology_manager.go:215] "Topology Admit Handler" podUID="ce304b83-f30f-46db-bfb6-971554b60429" podNamespace="calico-system" podName="calico-kube-controllers-674bcff85f-qlkvk" Dec 13 01:27:42.181130 kubelet[2597]: I1213 01:27:42.181082 2597 topology_manager.go:215] "Topology Admit Handler" podUID="9f955d26-c47f-4a21-b33a-e3a989a3e532" podNamespace="kube-system" podName="coredns-76f75df574-mfv2r" Dec 13 01:27:42.184738 kubelet[2597]: I1213 01:27:42.184693 2597 topology_manager.go:215] "Topology Admit Handler" podUID="fecc8a64-c7e5-403b-881c-5253c8b42a23" podNamespace="calico-apiserver" podName="calico-apiserver-7bb84f74c-x8pkp" Dec 13 01:27:42.187372 kubelet[2597]: I1213 01:27:42.184917 2597 topology_manager.go:215] "Topology Admit Handler" podUID="693e3e7a-b788-4c48-8270-e5f57917bed1" podNamespace="calico-apiserver" podName="calico-apiserver-7bb84f74c-7wpxw" Dec 13 01:27:42.188315 kubelet[2597]: I1213 01:27:42.187885 2597 topology_manager.go:215] "Topology Admit Handler" podUID="c537e295-f131-421b-b6e3-16e9b31f1282" podNamespace="kube-system" podName="coredns-76f75df574-x4qc6" Dec 13 01:27:42.193127 kubelet[2597]: I1213 01:27:42.193072 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce304b83-f30f-46db-bfb6-971554b60429-tigera-ca-bundle\") pod \"calico-kube-controllers-674bcff85f-qlkvk\" (UID: \"ce304b83-f30f-46db-bfb6-971554b60429\") " pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" Dec 13 01:27:42.193893 kubelet[2597]: I1213 01:27:42.193143 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjw9h\" (UniqueName: \"kubernetes.io/projected/ce304b83-f30f-46db-bfb6-971554b60429-kube-api-access-zjw9h\") pod \"calico-kube-controllers-674bcff85f-qlkvk\" (UID: \"ce304b83-f30f-46db-bfb6-971554b60429\") " pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" Dec 13 01:27:42.193714 systemd[1]: Created slice kubepods-besteffort-podce304b83_f30f_46db_bfb6_971554b60429.slice - libcontainer container kubepods-besteffort-podce304b83_f30f_46db_bfb6_971554b60429.slice. Dec 13 01:27:42.203357 systemd[1]: Created slice kubepods-burstable-pod9f955d26_c47f_4a21_b33a_e3a989a3e532.slice - libcontainer container kubepods-burstable-pod9f955d26_c47f_4a21_b33a_e3a989a3e532.slice. Dec 13 01:27:42.208786 systemd[1]: Created slice kubepods-besteffort-podfecc8a64_c7e5_403b_881c_5253c8b42a23.slice - libcontainer container kubepods-besteffort-podfecc8a64_c7e5_403b_881c_5253c8b42a23.slice. Dec 13 01:27:42.216943 systemd[1]: Created slice kubepods-besteffort-pod693e3e7a_b788_4c48_8270_e5f57917bed1.slice - libcontainer container kubepods-besteffort-pod693e3e7a_b788_4c48_8270_e5f57917bed1.slice. 
Dec 13 01:27:42.293769 kubelet[2597]: I1213 01:27:42.293691 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv62g\" (UniqueName: \"kubernetes.io/projected/9f955d26-c47f-4a21-b33a-e3a989a3e532-kube-api-access-qv62g\") pod \"coredns-76f75df574-mfv2r\" (UID: \"9f955d26-c47f-4a21-b33a-e3a989a3e532\") " pod="kube-system/coredns-76f75df574-mfv2r" Dec 13 01:27:42.293769 kubelet[2597]: I1213 01:27:42.293746 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nfgh\" (UniqueName: \"kubernetes.io/projected/693e3e7a-b788-4c48-8270-e5f57917bed1-kube-api-access-6nfgh\") pod \"calico-apiserver-7bb84f74c-7wpxw\" (UID: \"693e3e7a-b788-4c48-8270-e5f57917bed1\") " pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" Dec 13 01:27:42.293769 kubelet[2597]: I1213 01:27:42.293769 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c537e295-f131-421b-b6e3-16e9b31f1282-config-volume\") pod \"coredns-76f75df574-x4qc6\" (UID: \"c537e295-f131-421b-b6e3-16e9b31f1282\") " pod="kube-system/coredns-76f75df574-x4qc6" Dec 13 01:27:42.293769 kubelet[2597]: I1213 01:27:42.293792 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2wv\" (UniqueName: \"kubernetes.io/projected/fecc8a64-c7e5-403b-881c-5253c8b42a23-kube-api-access-gf2wv\") pod \"calico-apiserver-7bb84f74c-x8pkp\" (UID: \"fecc8a64-c7e5-403b-881c-5253c8b42a23\") " pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" Dec 13 01:27:42.294087 kubelet[2597]: I1213 01:27:42.293918 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f955d26-c47f-4a21-b33a-e3a989a3e532-config-volume\") pod \"coredns-76f75df574-mfv2r\" (UID: \"9f955d26-c47f-4a21-b33a-e3a989a3e532\") " pod="kube-system/coredns-76f75df574-mfv2r" Dec 13 01:27:42.294087 kubelet[2597]: I1213 01:27:42.293995 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/693e3e7a-b788-4c48-8270-e5f57917bed1-calico-apiserver-certs\") pod \"calico-apiserver-7bb84f74c-7wpxw\" (UID: \"693e3e7a-b788-4c48-8270-e5f57917bed1\") " pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" Dec 13 01:27:42.294087 kubelet[2597]: I1213 01:27:42.294020 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fecc8a64-c7e5-403b-881c-5253c8b42a23-calico-apiserver-certs\") pod \"calico-apiserver-7bb84f74c-x8pkp\" (UID: \"fecc8a64-c7e5-403b-881c-5253c8b42a23\") " pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" Dec 13 01:27:42.294087 kubelet[2597]: I1213 01:27:42.294068 2597 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxp5\" (UniqueName: \"kubernetes.io/projected/c537e295-f131-421b-b6e3-16e9b31f1282-kube-api-access-lsxp5\") pod \"coredns-76f75df574-x4qc6\" (UID: \"c537e295-f131-421b-b6e3-16e9b31f1282\") " pod="kube-system/coredns-76f75df574-x4qc6" Dec 13 01:27:42.334813 systemd[1]: Created slice kubepods-burstable-podc537e295_f131_421b_b6e3_16e9b31f1282.slice - libcontainer container 
kubepods-burstable-podc537e295_f131_421b_b6e3_16e9b31f1282.slice. Dec 13 01:27:42.431981 containerd[1466]: time="2024-12-13T01:27:42.431781816Z" level=info msg="shim disconnected" id=bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70 namespace=k8s.io Dec 13 01:27:42.431981 containerd[1466]: time="2024-12-13T01:27:42.431879279Z" level=warning msg="cleaning up after shim disconnected" id=bb4976755695be9f4435d9fcefdedde244188ae6b9614d0633fe5c7ece917e70 namespace=k8s.io Dec 13 01:27:42.431981 containerd[1466]: time="2024-12-13T01:27:42.431890991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:42.472322 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:39894.service - OpenSSH per-connection server daemon (10.0.0.1:39894). Dec 13 01:27:42.498156 containerd[1466]: time="2024-12-13T01:27:42.498113159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674bcff85f-qlkvk,Uid:ce304b83-f30f-46db-bfb6-971554b60429,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:42.506079 kubelet[2597]: E1213 01:27:42.506023 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:42.506893 containerd[1466]: time="2024-12-13T01:27:42.506819387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mfv2r,Uid:9f955d26-c47f-4a21-b33a-e3a989a3e532,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:42.516444 containerd[1466]: time="2024-12-13T01:27:42.516390711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-x8pkp,Uid:fecc8a64-c7e5-403b-881c-5253c8b42a23,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:27:42.520216 containerd[1466]: time="2024-12-13T01:27:42.520181368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-7wpxw,Uid:693e3e7a-b788-4c48-8270-e5f57917bed1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:27:42.521237 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 39894 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:42.523242 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:42.532606 systemd-logind[1455]: New session 8 of user core. Dec 13 01:27:42.544464 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:27:42.638918 kubelet[2597]: E1213 01:27:42.638524 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:42.639745 containerd[1466]: time="2024-12-13T01:27:42.639700114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4qc6,Uid:c537e295-f131-421b-b6e3-16e9b31f1282,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:42.741404 sshd[3455]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:42.745582 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:39894.service: Deactivated successfully. Dec 13 01:27:42.747683 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:27:42.748335 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:27:42.749250 systemd-logind[1455]: Removed session 8. 
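[Editor's note] The recurring "Nameserver limits exceeded" entries above indicate the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line keeps only the first few (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal sketch of that truncation, assuming the conventional limit of three; `parse_resolv_conf` and the sample fourth upstream are hypothetical, not kubelet's dns.go:

```python
# Sketch of the truncation behind "Nameserver limits exceeded" (assumed limit: 3).
MAX_NAMESERVERS = 3

def parse_resolv_conf(text: str) -> list[str]:
    """Collect nameserver entries from resolv.conf-style text."""
    return [line.split()[1] for line in text.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1]

def applied_nameservers(text: str) -> tuple[list[str], bool]:
    """Return the nameservers that would be applied and whether any were dropped."""
    servers = parse_resolv_conf(text)
    return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS

# Hypothetical example: a fourth upstream gets omitted, as the warning describes.
applied, truncated = applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 192.0.2.1\n"
)
assert applied == ["1.1.1.1", "1.0.0.1", "8.8.8.8"] and truncated
```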
Dec 13 01:27:42.952039 containerd[1466]: time="2024-12-13T01:27:42.951855308Z" level=error msg="Failed to destroy network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.952850 containerd[1466]: time="2024-12-13T01:27:42.952798320Z" level=error msg="encountered an error cleaning up failed sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.953020 containerd[1466]: time="2024-12-13T01:27:42.952970304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674bcff85f-qlkvk,Uid:ce304b83-f30f-46db-bfb6-971554b60429,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.953620 kubelet[2597]: E1213 01:27:42.953591 2597 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.954315 kubelet[2597]: E1213 01:27:42.953871 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" Dec 13 01:27:42.954315 kubelet[2597]: E1213 01:27:42.953907 2597 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" Dec 13 01:27:42.954459 kubelet[2597]: E1213 01:27:42.954442 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-674bcff85f-qlkvk_calico-system(ce304b83-f30f-46db-bfb6-971554b60429)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-674bcff85f-qlkvk_calico-system(ce304b83-f30f-46db-bfb6-971554b60429)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" podUID="ce304b83-f30f-46db-bfb6-971554b60429" Dec 13 01:27:42.962806 containerd[1466]: time="2024-12-13T01:27:42.962526980Z" level=error msg="Failed to destroy network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.963267 containerd[1466]: time="2024-12-13T01:27:42.963243808Z" level=error msg="encountered an error cleaning up failed sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.963405 containerd[1466]: time="2024-12-13T01:27:42.963385243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mfv2r,Uid:9f955d26-c47f-4a21-b33a-e3a989a3e532,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.964128 kubelet[2597]: E1213 01:27:42.963718 2597 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:42.964128 kubelet[2597]: E1213 01:27:42.963784 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mfv2r" Dec 13 01:27:42.964128 kubelet[2597]: E1213 01:27:42.963810 2597 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mfv2r" Dec 13 01:27:42.964245 kubelet[2597]: E1213 01:27:42.963897 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mfv2r_kube-system(9f955d26-c47f-4a21-b33a-e3a989a3e532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mfv2r_kube-system(9f955d26-c47f-4a21-b33a-e3a989a3e532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mfv2r" podUID="9f955d26-c47f-4a21-b33a-e3a989a3e532" Dec 13 01:27:42.972980 kubelet[2597]: E1213 01:27:42.972479 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:42.974070 containerd[1466]: time="2024-12-13T01:27:42.974042139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:27:42.979019 kubelet[2597]: I1213 01:27:42.979000 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:27:42.979665 containerd[1466]: time="2024-12-13T01:27:42.979640213Z" level=info msg="StopPodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\"" Dec 13 01:27:42.979984 containerd[1466]: time="2024-12-13T01:27:42.979965525Z" level=info msg="Ensure that sandbox 0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da in task-service has been cleanup successfully" Dec 13 01:27:42.981022 kubelet[2597]: I1213 01:27:42.981008 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:27:42.981661 containerd[1466]: time="2024-12-13T01:27:42.981643860Z" level=info msg="StopPodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\"" Dec 13 01:27:42.981844 containerd[1466]: time="2024-12-13T01:27:42.981816815Z" level=info msg="Ensure that sandbox d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61 in task-service has been cleanup successfully" Dec 13 01:27:42.996854 containerd[1466]: time="2024-12-13T01:27:42.996452079Z" level=error msg="Failed to destroy network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.001790 containerd[1466]: time="2024-12-13T01:27:43.000070973Z" level=error msg="Failed to destroy network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.001790 containerd[1466]: time="2024-12-13T01:27:43.001757433Z" level=error msg="encountered an error cleaning up failed sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.004976 containerd[1466]: time="2024-12-13T01:27:43.001832454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4qc6,Uid:c537e295-f131-421b-b6e3-16e9b31f1282,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.004976 containerd[1466]: time="2024-12-13T01:27:43.002129753Z" level=error msg="Failed to destroy network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.004976 containerd[1466]: time="2024-12-13T01:27:43.002707289Z" level=error msg="encountered an error cleaning up failed sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.004976 containerd[1466]: time="2024-12-13T01:27:43.002765508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-7wpxw,Uid:693e3e7a-b788-4c48-8270-e5f57917bed1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.004976 containerd[1466]: time="2024-12-13T01:27:43.004679516Z" level=error msg="encountered an error cleaning up failed sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.004976 containerd[1466]: time="2024-12-13T01:27:43.004722156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-x8pkp,Uid:fecc8a64-c7e5-403b-881c-5253c8b42a23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.005164 kubelet[2597]: E1213 01:27:43.002599 2597 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.005164 kubelet[2597]: E1213 01:27:43.002648 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-x4qc6" Dec 13 01:27:43.005164 kubelet[2597]: E1213 01:27:43.002669 2597 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-x4qc6" Dec 13 01:27:43.005255 kubelet[2597]: E1213 01:27:43.002718 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-x4qc6_kube-system(c537e295-f131-421b-b6e3-16e9b31f1282)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-x4qc6_kube-system(c537e295-f131-421b-b6e3-16e9b31f1282)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-x4qc6" podUID="c537e295-f131-421b-b6e3-16e9b31f1282" Dec 13 01:27:43.005255 kubelet[2597]: E1213 01:27:43.002952 2597 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.005255 kubelet[2597]: E1213 01:27:43.002985 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" Dec 13 01:27:43.005359 kubelet[2597]: E1213 01:27:43.003003 2597 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" Dec 13 01:27:43.005359 kubelet[2597]: E1213 01:27:43.003032 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb84f74c-7wpxw_calico-apiserver(693e3e7a-b788-4c48-8270-e5f57917bed1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb84f74c-7wpxw_calico-apiserver(693e3e7a-b788-4c48-8270-e5f57917bed1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" podUID="693e3e7a-b788-4c48-8270-e5f57917bed1" Dec 13 01:27:43.005573 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4-shm.mount: Deactivated successfully. Dec 13 01:27:43.005695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846-shm.mount: Deactivated successfully. Dec 13 01:27:43.008525 kubelet[2597]: E1213 01:27:43.007833 2597 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.008525 kubelet[2597]: E1213 01:27:43.008121 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" Dec 13 01:27:43.008525 kubelet[2597]: E1213 01:27:43.008214 2597 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" Dec 13 01:27:43.008410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71-shm.mount: Deactivated successfully. 
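[Editor's note] Every RunPodSandbox failure above hits the same precondition: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and until that file exists each add/delete fails with the stat error quoted in the log. A minimal sketch of that precondition check, offered as an illustration of the failure path rather than Calico's actual implementation:

```python
from pathlib import Path

# Illustration only -- not Calico's code. The CNI plugin needs the node name
# that calico/node writes to this file after it starts up.
NODENAME_FILE = Path("/var/lib/calico/nodename")

def read_calico_nodename() -> str:
    try:
        return NODENAME_FILE.read_text().strip()
    except FileNotFoundError:
        # Mirrors the condition reported for every sandbox attempt above.
        raise RuntimeError(
            f"stat {NODENAME_FILE}: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        )
```

Once the calico/node image finishes pulling and its container starts, the file is created and these sandbox errors stop recurring.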
Dec 13 01:27:43.008678 kubelet[2597]: E1213 01:27:43.008510 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb84f74c-x8pkp_calico-apiserver(fecc8a64-c7e5-403b-881c-5253c8b42a23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb84f74c-x8pkp_calico-apiserver(fecc8a64-c7e5-403b-881c-5253c8b42a23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" podUID="fecc8a64-c7e5-403b-881c-5253c8b42a23" Dec 13 01:27:43.035640 containerd[1466]: time="2024-12-13T01:27:43.035567085Z" level=error msg="StopPodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" failed" error="failed to destroy network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.035964 kubelet[2597]: E1213 01:27:43.035924 2597 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:27:43.036066 kubelet[2597]: E1213 01:27:43.036044 2597 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da"} Dec 13 01:27:43.036117 kubelet[2597]: E1213 01:27:43.036099 2597 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce304b83-f30f-46db-bfb6-971554b60429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:43.036194 kubelet[2597]: E1213 01:27:43.036148 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce304b83-f30f-46db-bfb6-971554b60429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" podUID="ce304b83-f30f-46db-bfb6-971554b60429" Dec 13 01:27:43.037580 containerd[1466]: time="2024-12-13T01:27:43.037526990Z" level=error msg="StopPodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" failed" error="failed to destroy network for sandbox 
\"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.037897 kubelet[2597]: E1213 01:27:43.037877 2597 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:27:43.037954 kubelet[2597]: E1213 01:27:43.037910 2597 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61"} Dec 13 01:27:43.037954 kubelet[2597]: E1213 01:27:43.037948 2597 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f955d26-c47f-4a21-b33a-e3a989a3e532\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:43.038058 kubelet[2597]: E1213 01:27:43.037989 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f955d26-c47f-4a21-b33a-e3a989a3e532\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mfv2r" podUID="9f955d26-c47f-4a21-b33a-e3a989a3e532" Dec 13 01:27:43.889965 systemd[1]: Created slice kubepods-besteffort-podca25e48b_50ec_452e_a7dc_d26850ad2858.slice - libcontainer container kubepods-besteffort-podca25e48b_50ec_452e_a7dc_d26850ad2858.slice. 
Dec 13 01:27:43.892088 containerd[1466]: time="2024-12-13T01:27:43.892051684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54ctg,Uid:ca25e48b-50ec-452e-a7dc-d26850ad2858,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:43.961334 containerd[1466]: time="2024-12-13T01:27:43.961260476Z" level=error msg="Failed to destroy network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.961765 containerd[1466]: time="2024-12-13T01:27:43.961732554Z" level=error msg="encountered an error cleaning up failed sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.961831 containerd[1466]: time="2024-12-13T01:27:43.961795011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54ctg,Uid:ca25e48b-50ec-452e-a7dc-d26850ad2858,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.962067 kubelet[2597]: E1213 01:27:43.962039 2597 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:43.962126 kubelet[2597]: E1213 01:27:43.962108 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:43.962155 kubelet[2597]: E1213 01:27:43.962129 2597 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54ctg" Dec 13 01:27:43.962218 kubelet[2597]: E1213 01:27:43.962192 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-54ctg_calico-system(ca25e48b-50ec-452e-a7dc-d26850ad2858)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-54ctg_calico-system(ca25e48b-50ec-452e-a7dc-d26850ad2858)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:43.984455 kubelet[2597]: I1213 01:27:43.984419 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:27:43.985529 containerd[1466]: time="2024-12-13T01:27:43.985078307Z" level=info msg="StopPodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\"" Dec 13 01:27:43.985529 containerd[1466]: time="2024-12-13T01:27:43.985249960Z" level=info msg="Ensure that sandbox 1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4 in task-service has been cleanup successfully" Dec 13 01:27:43.985100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1-shm.mount: Deactivated successfully. Dec 13 01:27:43.986735 kubelet[2597]: I1213 01:27:43.986167 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:27:43.986785 containerd[1466]: time="2024-12-13T01:27:43.986685959Z" level=info msg="StopPodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\"" Dec 13 01:27:43.986906 containerd[1466]: time="2024-12-13T01:27:43.986873281Z" level=info msg="Ensure that sandbox 46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1 in task-service has been cleanup successfully" Dec 13 01:27:43.987986 kubelet[2597]: I1213 01:27:43.987870 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:27:43.989834 containerd[1466]: time="2024-12-13T01:27:43.988431370Z" level=info msg="StopPodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\"" Dec 13 01:27:43.989834 containerd[1466]: time="2024-12-13T01:27:43.988565061Z" level=info msg="Ensure that sandbox 55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846 in task-service has been cleanup successfully" Dec 13 01:27:43.990458 kubelet[2597]: I1213 01:27:43.990436 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:27:43.991047 containerd[1466]: time="2024-12-13T01:27:43.991011670Z" level=info msg="StopPodSandbox for \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\"" Dec 13 01:27:43.991277 containerd[1466]: time="2024-12-13T01:27:43.991254306Z" level=info msg="Ensure that sandbox 4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71 in task-service has been cleanup successfully" Dec 13 01:27:44.022534 containerd[1466]: time="2024-12-13T01:27:44.022455089Z" level=error msg="StopPodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" failed" error="failed to destroy network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:44.022866 
kubelet[2597]: E1213 01:27:44.022732 2597 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:27:44.022866 kubelet[2597]: E1213 01:27:44.022780 2597 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1"} Dec 13 01:27:44.022866 kubelet[2597]: E1213 01:27:44.022826 2597 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca25e48b-50ec-452e-a7dc-d26850ad2858\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:44.022866 kubelet[2597]: E1213 01:27:44.022858 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca25e48b-50ec-452e-a7dc-d26850ad2858\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54ctg" podUID="ca25e48b-50ec-452e-a7dc-d26850ad2858" Dec 13 01:27:44.023837 containerd[1466]: time="2024-12-13T01:27:44.023765722Z" level=error msg="StopPodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" failed" error="failed to destroy network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:44.024084 kubelet[2597]: E1213 01:27:44.024043 2597 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:27:44.024133 kubelet[2597]: E1213 01:27:44.024090 2597 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4"} Dec 13 01:27:44.024133 kubelet[2597]: E1213 01:27:44.024121 2597 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c537e295-f131-421b-b6e3-16e9b31f1282\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:44.024232 kubelet[2597]: E1213 01:27:44.024144 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c537e295-f131-421b-b6e3-16e9b31f1282\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-x4qc6" podUID="c537e295-f131-421b-b6e3-16e9b31f1282" Dec 13 01:27:44.030053 containerd[1466]: time="2024-12-13T01:27:44.030003756Z" level=error msg="StopPodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" failed" error="failed to destroy network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:44.030244 kubelet[2597]: E1213 01:27:44.030222 2597 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:27:44.030327 kubelet[2597]: E1213 01:27:44.030258 2597 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846"} Dec 13 01:27:44.030327 kubelet[2597]: E1213 01:27:44.030307 2597 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"693e3e7a-b788-4c48-8270-e5f57917bed1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:44.030419 kubelet[2597]: E1213 01:27:44.030333 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"693e3e7a-b788-4c48-8270-e5f57917bed1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" podUID="693e3e7a-b788-4c48-8270-e5f57917bed1" Dec 13 01:27:44.034261 containerd[1466]: time="2024-12-13T01:27:44.034205514Z" level=error msg="StopPodSandbox for 
\"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" failed" error="failed to destroy network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:44.034434 kubelet[2597]: E1213 01:27:44.034397 2597 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:27:44.034434 kubelet[2597]: E1213 01:27:44.034429 2597 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71"} Dec 13 01:27:44.034517 kubelet[2597]: E1213 01:27:44.034459 2597 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fecc8a64-c7e5-403b-881c-5253c8b42a23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:44.034517 kubelet[2597]: E1213 01:27:44.034484 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fecc8a64-c7e5-403b-881c-5253c8b42a23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" podUID="fecc8a64-c7e5-403b-881c-5253c8b42a23" Dec 13 01:27:47.472581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783162837.mount: Deactivated successfully. Dec 13 01:27:47.754482 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:47872.service - OpenSSH per-connection server daemon (10.0.0.1:47872). 
Dec 13 01:27:47.917280 containerd[1466]: time="2024-12-13T01:27:47.917203438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:47.918383 containerd[1466]: time="2024-12-13T01:27:47.918307441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:27:47.920051 containerd[1466]: time="2024-12-13T01:27:47.919992167Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:47.927444 containerd[1466]: time="2024-12-13T01:27:47.926562861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:47.927444 containerd[1466]: time="2024-12-13T01:27:47.927262746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.953099651s" Dec 13 01:27:47.927444 containerd[1466]: time="2024-12-13T01:27:47.927327498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:27:47.943357 containerd[1466]: time="2024-12-13T01:27:47.943268477Z" level=info msg="CreateContainer within sandbox \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:27:47.945618 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 47872 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:47.947689 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:47.953466 systemd-logind[1455]: New session 9 of user core. Dec 13 01:27:47.966519 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:27:47.969037 containerd[1466]: time="2024-12-13T01:27:47.966871420Z" level=info msg="CreateContainer within sandbox \"9c23e312c58412895c4662f2af39a6b972231699645d3c16d22af1763660a51f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5f143b9cfb331d6bedbb8ebd3f82eaa4ccdf2cd9063aba0726d98514e496f6fd\"" Dec 13 01:27:47.969037 containerd[1466]: time="2024-12-13T01:27:47.967887489Z" level=info msg="StartContainer for \"5f143b9cfb331d6bedbb8ebd3f82eaa4ccdf2cd9063aba0726d98514e496f6fd\"" Dec 13 01:27:48.045490 systemd[1]: Started cri-containerd-5f143b9cfb331d6bedbb8ebd3f82eaa4ccdf2cd9063aba0726d98514e496f6fd.scope - libcontainer container 5f143b9cfb331d6bedbb8ebd3f82eaa4ccdf2cd9063aba0726d98514e496f6fd. Dec 13 01:27:48.325797 containerd[1466]: time="2024-12-13T01:27:48.325640199Z" level=info msg="StartContainer for \"5f143b9cfb331d6bedbb8ebd3f82eaa4ccdf2cd9063aba0726d98514e496f6fd\" returns successfully" Dec 13 01:27:48.332531 sshd[3830]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:48.337205 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:47872.service: Deactivated successfully. Dec 13 01:27:48.339566 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 13 01:27:48.340283 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:27:48.341277 systemd-logind[1455]: Removed session 9. Dec 13 01:27:48.405926 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:27:48.406111 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:27:49.010049 kubelet[2597]: E1213 01:27:49.010017 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:49.142324 kubelet[2597]: I1213 01:27:49.142238 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hmrqg" podStartSLOduration=2.358585963 podStartE2EDuration="18.142187977s" podCreationTimestamp="2024-12-13 01:27:31 +0000 UTC" firstStartedPulling="2024-12-13 01:27:32.144904048 +0000 UTC m=+20.390229153" lastFinishedPulling="2024-12-13 01:27:47.928506072 +0000 UTC m=+36.173831167" observedRunningTime="2024-12-13 01:27:49.133013083 +0000 UTC m=+37.378338178" watchObservedRunningTime="2024-12-13 01:27:49.142187977 +0000 UTC m=+37.387513072" Dec 13 01:27:52.098889 kubelet[2597]: I1213 01:27:52.098756 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:52.099674 kubelet[2597]: E1213 01:27:52.099596 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:53.350703 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:47888.service - OpenSSH per-connection server daemon (10.0.0.1:47888). Dec 13 01:27:53.501280 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 47888 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:53.503125 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:53.507633 systemd-logind[1455]: New session 10 of user core. Dec 13 01:27:53.516487 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:27:53.635785 sshd[4130]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:53.640644 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:47888.service: Deactivated successfully. Dec 13 01:27:53.643230 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:27:53.643858 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:27:53.644904 systemd-logind[1455]: Removed session 10. Dec 13 01:27:54.888466 containerd[1466]: time="2024-12-13T01:27:54.888403164Z" level=info msg="StopPodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\"" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:54.968 [INFO][4191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:54.970 [INFO][4191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" iface="eth0" netns="/var/run/netns/cni-4c888198-5c99-d13e-7dd1-575deb059056" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:54.970 [INFO][4191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" iface="eth0" netns="/var/run/netns/cni-4c888198-5c99-d13e-7dd1-575deb059056" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:54.971 [INFO][4191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" iface="eth0" netns="/var/run/netns/cni-4c888198-5c99-d13e-7dd1-575deb059056" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:54.971 [INFO][4191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:54.971 [INFO][4191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.018 [INFO][4199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.018 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.019 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.057 [WARNING][4199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.057 [INFO][4199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.059 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:55.064916 containerd[1466]: 2024-12-13 01:27:55.061 [INFO][4191] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:27:55.065451 containerd[1466]: time="2024-12-13T01:27:55.065078032Z" level=info msg="TearDown network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" successfully" Dec 13 01:27:55.065451 containerd[1466]: time="2024-12-13T01:27:55.065104622Z" level=info msg="StopPodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" returns successfully" Dec 13 01:27:55.067937 containerd[1466]: time="2024-12-13T01:27:55.067903366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54ctg,Uid:ca25e48b-50ec-452e-a7dc-d26850ad2858,Namespace:calico-system,Attempt:1,}" Dec 13 01:27:55.068232 systemd[1]: run-netns-cni\x2d4c888198\x2d5c99\x2dd13e\x2d7dd1\x2d575deb059056.mount: Deactivated successfully. 
Dec 13 01:27:55.429803 systemd-networkd[1410]: calif856d57227b: Link UP Dec 13 01:27:55.430131 systemd-networkd[1410]: calif856d57227b: Gained carrier Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.335 [INFO][4208] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.345 [INFO][4208] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--54ctg-eth0 csi-node-driver- calico-system ca25e48b-50ec-452e-a7dc-d26850ad2858 832 0 2024-12-13 01:27:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-54ctg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif856d57227b [] []}} ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.346 [INFO][4208] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.381 [INFO][4222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" HandleID="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.390 [INFO][4222] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" HandleID="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-54ctg", "timestamp":"2024-12-13 01:27:55.38138164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.390 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.390 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.390 [INFO][4222] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.392 [INFO][4222] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.398 [INFO][4222] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.402 [INFO][4222] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.403 [INFO][4222] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.405 [INFO][4222] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.405 [INFO][4222] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.407 [INFO][4222] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33 Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.411 [INFO][4222] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.419 [INFO][4222] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.419 [INFO][4222] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" host="localhost" Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.419 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
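[Annotation] The IPAM sequence above allocates out of the block 192.168.88.128/26, which already has an affinity to this host. A /26 spans 64 addresses, 192.168.88.128 through 192.168.88.191; the first address handed out in this trace is 192.168.88.129 (a later sandbox in this log receives .130). The small standard-library check below just confirms that block arithmetic; it makes no claim about how the allocator picks addresses within the block.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Size of the block: 2^(32-26) = 64 addresses.
	size := 1 << (32 - block.Bits())
	fmt.Println("addresses in block:", size) // 64

	// Walk the block to find its first and last addresses.
	first := block.Masked().Addr()
	last := first
	for a := first; block.Contains(a); a = a.Next() {
		last = a
	}
	fmt.Println("first:", first, "last:", last) // 192.168.88.128 ... 192.168.88.191

	// The addresses assigned in this trace fall inside the block.
	for _, s := range []string{"192.168.88.129", "192.168.88.130"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
	}
}
```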
Dec 13 01:27:55.449843 containerd[1466]: 2024-12-13 01:27:55.419 [INFO][4222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" HandleID="k8s-pod-network.e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.450727 containerd[1466]: 2024-12-13 01:27:55.422 [INFO][4208] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54ctg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca25e48b-50ec-452e-a7dc-d26850ad2858", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-54ctg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif856d57227b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:55.450727 containerd[1466]: 2024-12-13 01:27:55.422 [INFO][4208] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.450727 containerd[1466]: 2024-12-13 01:27:55.422 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif856d57227b ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.450727 containerd[1466]: 2024-12-13 01:27:55.430 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.450727 containerd[1466]: 2024-12-13 01:27:55.430 [INFO][4208] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54ctg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca25e48b-50ec-452e-a7dc-d26850ad2858", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33", Pod:"csi-node-driver-54ctg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif856d57227b", MAC:"3e:76:f1:7b:fe:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:55.450727 containerd[1466]: 2024-12-13 01:27:55.442 [INFO][4208] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33" Namespace="calico-system" Pod="csi-node-driver-54ctg" WorkloadEndpoint="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:27:55.490442 containerd[1466]: time="2024-12-13T01:27:55.490147814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:55.490442 containerd[1466]: time="2024-12-13T01:27:55.490215851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:55.490442 containerd[1466]: time="2024-12-13T01:27:55.490229307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:55.490442 containerd[1466]: time="2024-12-13T01:27:55.490364240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:55.512737 systemd[1]: Started cri-containerd-e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33.scope - libcontainer container e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33. 
Dec 13 01:27:55.538453 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:55.558692 containerd[1466]: time="2024-12-13T01:27:55.558638455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54ctg,Uid:ca25e48b-50ec-452e-a7dc-d26850ad2858,Namespace:calico-system,Attempt:1,} returns sandbox id \"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33\"" Dec 13 01:27:55.561034 containerd[1466]: time="2024-12-13T01:27:55.560803460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:27:56.884424 containerd[1466]: time="2024-12-13T01:27:56.884375588Z" level=info msg="StopPodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\"" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.932 [INFO][4349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.932 [INFO][4349] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" iface="eth0" netns="/var/run/netns/cni-55688326-4b1f-c8e7-28ff-5cdff4a54c7c" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.932 [INFO][4349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" iface="eth0" netns="/var/run/netns/cni-55688326-4b1f-c8e7-28ff-5cdff4a54c7c" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.932 [INFO][4349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" iface="eth0" netns="/var/run/netns/cni-55688326-4b1f-c8e7-28ff-5cdff4a54c7c" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.932 [INFO][4349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.932 [INFO][4349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.951 [INFO][4357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.951 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.951 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.956 [WARNING][4357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.956 [INFO][4357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.957 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:56.962630 containerd[1466]: 2024-12-13 01:27:56.960 [INFO][4349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:27:56.963575 containerd[1466]: time="2024-12-13T01:27:56.963328853Z" level=info msg="TearDown network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" successfully" Dec 13 01:27:56.963575 containerd[1466]: time="2024-12-13T01:27:56.963362255Z" level=info msg="StopPodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" returns successfully" Dec 13 01:27:56.965004 containerd[1466]: time="2024-12-13T01:27:56.964976716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674bcff85f-qlkvk,Uid:ce304b83-f30f-46db-bfb6-971554b60429,Namespace:calico-system,Attempt:1,}" Dec 13 01:27:56.965854 systemd[1]: run-netns-cni\x2d55688326\x2d4b1f\x2dc8e7\x2d28ff\x2d5cdff4a54c7c.mount: Deactivated successfully. 
Dec 13 01:27:56.969440 systemd-networkd[1410]: calif856d57227b: Gained IPv6LL Dec 13 01:27:57.269649 systemd-networkd[1410]: calif5fc1e8a1e3: Link UP Dec 13 01:27:57.270410 systemd-networkd[1410]: calif5fc1e8a1e3: Gained carrier Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.132 [INFO][4364] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.141 [INFO][4364] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0 calico-kube-controllers-674bcff85f- calico-system ce304b83-f30f-46db-bfb6-971554b60429 851 0 2024-12-13 01:27:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:674bcff85f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-674bcff85f-qlkvk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif5fc1e8a1e3 [] []}} ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.141 [INFO][4364] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.172 [INFO][4380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" HandleID="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.180 [INFO][4380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" HandleID="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309820), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-674bcff85f-qlkvk", "timestamp":"2024-12-13 01:27:57.172979667 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.180 [INFO][4380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.180 [INFO][4380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.180 [INFO][4380] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.182 [INFO][4380] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.185 [INFO][4380] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.189 [INFO][4380] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.190 [INFO][4380] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.193 [INFO][4380] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.193 [INFO][4380] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.194 [INFO][4380] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.218 [INFO][4380] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.252 [INFO][4380] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.253 [INFO][4380] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" host="localhost" Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.253 [INFO][4380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:27:57.340168 containerd[1466]: 2024-12-13 01:27:57.253 [INFO][4380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" HandleID="k8s-pod-network.4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.340789 containerd[1466]: 2024-12-13 01:27:57.262 [INFO][4364] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0", GenerateName:"calico-kube-controllers-674bcff85f-", Namespace:"calico-system", SelfLink:"", UID:"ce304b83-f30f-46db-bfb6-971554b60429", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674bcff85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-674bcff85f-qlkvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5fc1e8a1e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:57.340789 containerd[1466]: 2024-12-13 01:27:57.263 [INFO][4364] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.340789 containerd[1466]: 2024-12-13 01:27:57.263 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5fc1e8a1e3 ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.340789 containerd[1466]: 2024-12-13 01:27:57.269 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.340789 containerd[1466]: 2024-12-13 01:27:57.270 [INFO][4364] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0", GenerateName:"calico-kube-controllers-674bcff85f-", Namespace:"calico-system", SelfLink:"", UID:"ce304b83-f30f-46db-bfb6-971554b60429", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674bcff85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c", Pod:"calico-kube-controllers-674bcff85f-qlkvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5fc1e8a1e3", MAC:"fa:2c:3b:3b:85:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:57.340789 containerd[1466]: 2024-12-13 01:27:57.337 [INFO][4364] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c" Namespace="calico-system" Pod="calico-kube-controllers-674bcff85f-qlkvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:27:57.587771 containerd[1466]: time="2024-12-13T01:27:57.587533806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:57.587771 containerd[1466]: time="2024-12-13T01:27:57.587634334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:57.587771 containerd[1466]: time="2024-12-13T01:27:57.587646497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:57.588050 containerd[1466]: time="2024-12-13T01:27:57.588004169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:57.617444 systemd[1]: Started cri-containerd-4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c.scope - libcontainer container 4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c. 
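[Annotation] When the endpoint is written back to the datastore, the CNI plugin has filled in a MAC for the workload interface (3e:76:f1:7b:fe:18 earlier, fa:2c:3b:3b:85:bc here). Both are unicast addresses with the locally-administered bit set, i.e. synthetic addresses rather than vendor-assigned ones. The sketch below only illustrates that bit manipulation for generating such an address; it is not a claim about how Calico actually chooses its MACs.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random unicast, locally-administered MAC:
// bit 0 of the first byte (I/G) cleared, bit 1 (U/L) set.
func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	// The MACs in the trace have the same two bits: 0x3e and 0xfa both end in ...10.
	fmt.Println(mac)
}
```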
Dec 13 01:27:57.630644 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:57.658364 containerd[1466]: time="2024-12-13T01:27:57.658272020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-674bcff85f-qlkvk,Uid:ce304b83-f30f-46db-bfb6-971554b60429,Namespace:calico-system,Attempt:1,} returns sandbox id \"4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c\"" Dec 13 01:27:57.801285 containerd[1466]: time="2024-12-13T01:27:57.801225902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:57.802120 containerd[1466]: time="2024-12-13T01:27:57.802071810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:27:57.803494 containerd[1466]: time="2024-12-13T01:27:57.803414701Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:57.805755 containerd[1466]: time="2024-12-13T01:27:57.805710660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:57.806348 containerd[1466]: time="2024-12-13T01:27:57.806321958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.245492088s" Dec 13 01:27:57.806402 containerd[1466]: time="2024-12-13T01:27:57.806353297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:27:57.806993 containerd[1466]: time="2024-12-13T01:27:57.806957751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:27:57.808108 containerd[1466]: time="2024-12-13T01:27:57.808075770Z" level=info msg="CreateContainer within sandbox \"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:27:57.825892 containerd[1466]: time="2024-12-13T01:27:57.825842932Z" level=info msg="CreateContainer within sandbox \"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"49ddd70594b6790428e5d11283e2eff70546b0b6f3cf5bbb03a04212ad778816\"" Dec 13 01:27:57.826472 containerd[1466]: time="2024-12-13T01:27:57.826426226Z" level=info msg="StartContainer for \"49ddd70594b6790428e5d11283e2eff70546b0b6f3cf5bbb03a04212ad778816\"" Dec 13 01:27:57.880434 systemd[1]: Started cri-containerd-49ddd70594b6790428e5d11283e2eff70546b0b6f3cf5bbb03a04212ad778816.scope - libcontainer container 49ddd70594b6790428e5d11283e2eff70546b0b6f3cf5bbb03a04212ad778816. 
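[Annotation] containerd reports the csi image pull as taking 2.245492088s. That figure lines up with the event timestamps in this log: the PullImage request was logged at 2024-12-13T01:27:55.560803460Z (just after the sandbox came up) and the "Pulled image" event at 2024-12-13T01:27:57.806321958Z, about 2.2455 s apart; the small discrepancy is just logging overhead around the measured interval. The subtraction, using the RFC 3339 timestamps exactly as they appear in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage and Pulled events above.
	start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:27:55.560803460Z")
	end, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:27:57.806321958Z")
	fmt.Println(end.Sub(start)) // 2.245518498s, vs. the reported 2.245492088s
}
```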
Dec 13 01:27:57.884993 containerd[1466]: time="2024-12-13T01:27:57.884915988Z" level=info msg="StopPodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\"" Dec 13 01:27:57.889323 containerd[1466]: time="2024-12-13T01:27:57.887879221Z" level=info msg="StopPodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\"" Dec 13 01:27:57.925890 containerd[1466]: time="2024-12-13T01:27:57.925844687Z" level=info msg="StartContainer for \"49ddd70594b6790428e5d11283e2eff70546b0b6f3cf5bbb03a04212ad778816\" returns successfully" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.940 [INFO][4525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.941 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" iface="eth0" netns="/var/run/netns/cni-13ffdefd-734c-6792-a227-88ace9b2d483" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.941 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" iface="eth0" netns="/var/run/netns/cni-13ffdefd-734c-6792-a227-88ace9b2d483" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.941 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" iface="eth0" netns="/var/run/netns/cni-13ffdefd-734c-6792-a227-88ace9b2d483" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.941 [INFO][4525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.941 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.972 [INFO][4552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.973 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.973 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.978 [WARNING][4552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.978 [INFO][4552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.979 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:57.985793 containerd[1466]: 2024-12-13 01:27:57.983 [INFO][4525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:27:57.986361 containerd[1466]: time="2024-12-13T01:27:57.986016096Z" level=info msg="TearDown network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" successfully" Dec 13 01:27:57.986361 containerd[1466]: time="2024-12-13T01:27:57.986058265Z" level=info msg="StopPodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" returns successfully" Dec 13 01:27:57.986675 kubelet[2597]: E1213 01:27:57.986648 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:57.987091 containerd[1466]: time="2024-12-13T01:27:57.987051630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mfv2r,Uid:9f955d26-c47f-4a21-b33a-e3a989a3e532,Namespace:kube-system,Attempt:1,}" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.946 [INFO][4526] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.947 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" iface="eth0" netns="/var/run/netns/cni-7452630e-05e5-5c80-a3ee-723a9dfb7a2d" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.947 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" iface="eth0" netns="/var/run/netns/cni-7452630e-05e5-5c80-a3ee-723a9dfb7a2d" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.947 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" iface="eth0" netns="/var/run/netns/cni-7452630e-05e5-5c80-a3ee-723a9dfb7a2d" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.947 [INFO][4526] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.947 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.976 [INFO][4557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.976 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.979 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.983 [WARNING][4557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.983 [INFO][4557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.984 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:57.989999 containerd[1466]: 2024-12-13 01:27:57.987 [INFO][4526] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:27:57.990509 containerd[1466]: time="2024-12-13T01:27:57.990179381Z" level=info msg="TearDown network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" successfully" Dec 13 01:27:57.990509 containerd[1466]: time="2024-12-13T01:27:57.990210649Z" level=info msg="StopPodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" returns successfully" Dec 13 01:27:57.990819 containerd[1466]: time="2024-12-13T01:27:57.990788464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-7wpxw,Uid:693e3e7a-b788-4c48-8270-e5f57917bed1,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:27:58.117244 systemd[1]: run-netns-cni\x2d7452630e\x2d05e5\x2d5c80\x2da3ee\x2d723a9dfb7a2d.mount: Deactivated successfully. Dec 13 01:27:58.118054 systemd[1]: run-netns-cni\x2d13ffdefd\x2d734c\x2d6792\x2da227\x2d88ace9b2d483.mount: Deactivated successfully. 
Dec 13 01:27:58.121446 systemd-networkd[1410]: cali8ab73e61146: Link UP Dec 13 01:27:58.121764 systemd-networkd[1410]: cali8ab73e61146: Gained carrier Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.024 [INFO][4568] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.036 [INFO][4568] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--mfv2r-eth0 coredns-76f75df574- kube-system 9f955d26-c47f-4a21-b33a-e3a989a3e532 864 0 2024-12-13 01:27:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-mfv2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ab73e61146 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.036 [INFO][4568] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.073 [INFO][4596] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" HandleID="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.083 [INFO][4596] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" HandleID="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cfb70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-mfv2r", "timestamp":"2024-12-13 01:27:58.073755926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.083 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.083 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.083 [INFO][4596] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.085 [INFO][4596] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.090 [INFO][4596] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.096 [INFO][4596] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.097 [INFO][4596] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.099 [INFO][4596] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.099 [INFO][4596] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.100 [INFO][4596] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1 Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.104 [INFO][4596] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.110 [INFO][4596] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.112 [INFO][4596] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" host="localhost" Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.112 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
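The assignment run logged above (acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, claim the next free address, write the block back, release the lock) is essentially a locked next-free-IP scan over a /26. The following is a rough, self-contained sketch of that idea, assuming an in-memory block and a local threading lock rather than Calico's real datastore and distributed locking:

```python
import ipaddress
import threading

class Block:
    """A toy stand-in for a Calico IPAM block with host affinity."""

    def __init__(self, cidr: str, host: str):
        self.cidr = ipaddress.ip_network(cidr)
        self.host = host                       # node this block is affine to
        self.allocated: dict[str, str] = {}    # ip -> handle ID
        self.lock = threading.Lock()           # stands in for the host-wide IPAM lock

    def auto_assign(self, handle_id: str, host: str) -> str:
        if host != self.host:
            raise ValueError(f"block {self.cidr} is not affine to {host}")
        with self.lock:                        # "About to acquire host-wide IPAM lock."
            for addr in self.cidr.hosts():     # usable hosts: .129 .. .190 for a /26
                ip = str(addr)
                if ip not in self.allocated:
                    self.allocated[ip] = handle_id   # "Writing block in order to claim IPs"
                    return ip                        # "Successfully claimed IPs: [...]"
            raise RuntimeError(f"block {self.cidr} is full")

block = Block("192.168.88.128/26", host="localhost")
block.allocated.update({"192.168.88.129": "h1", "192.168.88.130": "h2"})  # earlier pods
print(block.auto_assign(
    "k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1",
    "localhost",
))  # -> 192.168.88.131, matching the address claimed in the log
```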
Dec 13 01:27:58.140620 containerd[1466]: 2024-12-13 01:27:58.112 [INFO][4596] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" HandleID="k8s-pod-network.139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.141561 containerd[1466]: 2024-12-13 01:27:58.115 [INFO][4568] cni-plugin/k8s.go 386: Populated endpoint ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mfv2r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9f955d26-c47f-4a21-b33a-e3a989a3e532", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-mfv2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ab73e61146", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:58.141561 containerd[1466]: 2024-12-13 01:27:58.115 [INFO][4568] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.141561 containerd[1466]: 2024-12-13 01:27:58.115 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ab73e61146 ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.141561 containerd[1466]: 2024-12-13 01:27:58.121 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.141561 containerd[1466]: 2024-12-13 01:27:58.123 
[INFO][4568] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mfv2r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9f955d26-c47f-4a21-b33a-e3a989a3e532", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1", Pod:"coredns-76f75df574-mfv2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ab73e61146", MAC:"da:d3:aa:a5:ac:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:58.141561 containerd[1466]: 2024-12-13 01:27:58.138 [INFO][4568] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1" Namespace="kube-system" Pod="coredns-76f75df574-mfv2r" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:27:58.155969 systemd-networkd[1410]: cali2486f718283: Link UP Dec 13 01:27:58.156157 systemd-networkd[1410]: cali2486f718283: Gained carrier Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.031 [INFO][4578] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.048 [INFO][4578] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0 calico-apiserver-7bb84f74c- calico-apiserver 693e3e7a-b788-4c48-8270-e5f57917bed1 865 0 2024-12-13 01:27:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb84f74c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bb84f74c-7wpxw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2486f718283 [] []}} 
ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.048 [INFO][4578] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.082 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" HandleID="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.095 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" HandleID="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011de30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bb84f74c-7wpxw", "timestamp":"2024-12-13 01:27:58.082221616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.095 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.113 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.114 [INFO][4601] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.119 [INFO][4601] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.125 [INFO][4601] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.131 [INFO][4601] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.132 [INFO][4601] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.134 [INFO][4601] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.134 [INFO][4601] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.136 [INFO][4601] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349 Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.140 [INFO][4601] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.146 [INFO][4601] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.146 [INFO][4601] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" host="localhost" Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.146 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:27:58.168304 containerd[1466]: 2024-12-13 01:27:58.147 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" HandleID="k8s-pod-network.12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.169022 containerd[1466]: 2024-12-13 01:27:58.150 [INFO][4578] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"693e3e7a-b788-4c48-8270-e5f57917bed1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bb84f74c-7wpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2486f718283", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:58.169022 containerd[1466]: 2024-12-13 01:27:58.151 [INFO][4578] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.169022 containerd[1466]: 2024-12-13 01:27:58.151 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2486f718283 ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.169022 containerd[1466]: 2024-12-13 01:27:58.154 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.169022 containerd[1466]: 2024-12-13 01:27:58.154 [INFO][4578] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" 
Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"693e3e7a-b788-4c48-8270-e5f57917bed1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349", Pod:"calico-apiserver-7bb84f74c-7wpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2486f718283", MAC:"86:1d:3a:05:65:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:58.169022 containerd[1466]: 2024-12-13 01:27:58.164 [INFO][4578] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-7wpxw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:27:58.170349 containerd[1466]: time="2024-12-13T01:27:58.169541427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:58.170468 containerd[1466]: time="2024-12-13T01:27:58.170415247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:58.170468 containerd[1466]: time="2024-12-13T01:27:58.170440365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.170685 containerd[1466]: time="2024-12-13T01:27:58.170633657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.196224 containerd[1466]: time="2024-12-13T01:27:58.195926622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:58.196224 containerd[1466]: time="2024-12-13T01:27:58.196026750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:58.196224 containerd[1466]: time="2024-12-13T01:27:58.196065743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.196224 containerd[1466]: time="2024-12-13T01:27:58.196157315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:58.199520 systemd[1]: Started cri-containerd-139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1.scope - libcontainer container 139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1. Dec 13 01:27:58.221628 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:58.229505 systemd[1]: Started cri-containerd-12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349.scope - libcontainer container 12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349. Dec 13 01:27:58.249531 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:58.252966 containerd[1466]: time="2024-12-13T01:27:58.252914409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mfv2r,Uid:9f955d26-c47f-4a21-b33a-e3a989a3e532,Namespace:kube-system,Attempt:1,} returns sandbox id \"139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1\"" Dec 13 01:27:58.255203 kubelet[2597]: E1213 01:27:58.253804 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:58.257387 containerd[1466]: time="2024-12-13T01:27:58.257352820Z" level=info msg="CreateContainer within sandbox \"139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:27:58.278009 containerd[1466]: time="2024-12-13T01:27:58.277948689Z" level=info msg="CreateContainer within sandbox \"139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fdcec1e7e3603db29d4c3172f4783c206be9e1752b5598507eee623a63c9aa2\"" Dec 13 01:27:58.279034 containerd[1466]: time="2024-12-13T01:27:58.278990795Z" level=info msg="StartContainer for \"6fdcec1e7e3603db29d4c3172f4783c206be9e1752b5598507eee623a63c9aa2\"" Dec 13 01:27:58.280767 containerd[1466]: time="2024-12-13T01:27:58.280728225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-7wpxw,Uid:693e3e7a-b788-4c48-8270-e5f57917bed1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349\"" Dec 13 01:27:58.307462 systemd[1]: Started cri-containerd-6fdcec1e7e3603db29d4c3172f4783c206be9e1752b5598507eee623a63c9aa2.scope - libcontainer container 6fdcec1e7e3603db29d4c3172f4783c206be9e1752b5598507eee623a63c9aa2. Dec 13 01:27:58.313571 systemd-networkd[1410]: calif5fc1e8a1e3: Gained IPv6LL Dec 13 01:27:58.338365 containerd[1466]: time="2024-12-13T01:27:58.338322000Z" level=info msg="StartContainer for \"6fdcec1e7e3603db29d4c3172f4783c206be9e1752b5598507eee623a63c9aa2\" returns successfully" Dec 13 01:27:58.648539 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:39430.service - OpenSSH per-connection server daemon (10.0.0.1:39430). 
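The recurring kubelet error "Nameserver limits exceeded" comes from pod resolv.conf generation: kubelet only propagates a small number of nameservers (three, the classic glibc resolver limit), so when the node's resolver configuration lists more, the extras are dropped and the applied line is logged, here "1.1.1.1 1.0.0.1 8.8.8.8". A small illustration of that truncation follows; the limit of three reflects kubelet's cap, while the function name and resolver list are hypothetical:

```python
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")
log = logging.getLogger("kubelet.dns")

MAX_NAMESERVERS = 3  # glibc's resolver honours at most 3 nameserver entries

def apply_nameservers(configured: list[str]) -> list[str]:
    """Keep the first MAX_NAMESERVERS entries and log when some are dropped."""
    applied = configured[:MAX_NAMESERVERS]
    if len(configured) > MAX_NAMESERVERS:
        log.error(
            "Nameserver limits exceeded: omitted %d server(s); applied nameserver line is: %s",
            len(configured) - MAX_NAMESERVERS,
            " ".join(applied),
        )
    return applied

# Hypothetical node resolv.conf with four upstream resolvers:
apply_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
```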
Dec 13 01:27:58.734868 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 39430 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:58.735501 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:58.745836 systemd-logind[1455]: New session 11 of user core. Dec 13 01:27:58.751720 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:27:58.884161 containerd[1466]: time="2024-12-13T01:27:58.884108688Z" level=info msg="StopPodSandbox for \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\"" Dec 13 01:27:58.884280 containerd[1466]: time="2024-12-13T01:27:58.884180844Z" level=info msg="StopPodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\"" Dec 13 01:27:58.893682 sshd[4751]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:58.906253 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:39430.service: Deactivated successfully. Dec 13 01:27:58.910721 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:27:58.912872 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:27:58.916166 systemd-logind[1455]: Removed session 11. Dec 13 01:27:58.925896 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:39432.service - OpenSSH per-connection server daemon (10.0.0.1:39432). Dec 13 01:27:58.997982 sshd[4833]: Accepted publickey for core from 10.0.0.1 port 39432 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:58.999865 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:59.005846 systemd-logind[1455]: New session 12 of user core. Dec 13 01:27:59.015489 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:58.985 [INFO][4818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:58.985 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" iface="eth0" netns="/var/run/netns/cni-cc30179d-3c89-68a0-8b02-3a11c5570546" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:58.985 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" iface="eth0" netns="/var/run/netns/cni-cc30179d-3c89-68a0-8b02-3a11c5570546" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:58.985 [INFO][4818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" iface="eth0" netns="/var/run/netns/cni-cc30179d-3c89-68a0-8b02-3a11c5570546" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:58.985 [INFO][4818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:58.985 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.012 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.012 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.012 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.019 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.019 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.021 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:59.027106 containerd[1466]: 2024-12-13 01:27:59.024 [INFO][4818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:27:59.028899 containerd[1466]: time="2024-12-13T01:27:59.027334011Z" level=info msg="TearDown network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" successfully" Dec 13 01:27:59.028899 containerd[1466]: time="2024-12-13T01:27:59.027364949Z" level=info msg="StopPodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" returns successfully" Dec 13 01:27:59.028899 containerd[1466]: time="2024-12-13T01:27:59.028121830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4qc6,Uid:c537e295-f131-421b-b6e3-16e9b31f1282,Namespace:kube-system,Attempt:1,}" Dec 13 01:27:59.028973 kubelet[2597]: E1213 01:27:59.027755 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.010 [INFO][4820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.010 [INFO][4820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" iface="eth0" netns="/var/run/netns/cni-81a8d1b4-c6d1-9f39-9615-29d4f8608073" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.011 [INFO][4820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" iface="eth0" netns="/var/run/netns/cni-81a8d1b4-c6d1-9f39-9615-29d4f8608073" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.011 [INFO][4820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" iface="eth0" netns="/var/run/netns/cni-81a8d1b4-c6d1-9f39-9615-29d4f8608073" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.011 [INFO][4820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.011 [INFO][4820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.034 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.035 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.035 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.040 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.040 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.042 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:59.048413 containerd[1466]: 2024-12-13 01:27:59.045 [INFO][4820] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:27:59.048836 containerd[1466]: time="2024-12-13T01:27:59.048666350Z" level=info msg="TearDown network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" successfully" Dec 13 01:27:59.048836 containerd[1466]: time="2024-12-13T01:27:59.048736472Z" level=info msg="StopPodSandbox for \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" returns successfully" Dec 13 01:27:59.049851 containerd[1466]: time="2024-12-13T01:27:59.049809797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-x8pkp,Uid:fecc8a64-c7e5-403b-881c-5253c8b42a23,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:27:59.065981 kubelet[2597]: E1213 01:27:59.065889 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:59.103168 kubelet[2597]: I1213 01:27:59.103116 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mfv2r" podStartSLOduration=34.103070753 podStartE2EDuration="34.103070753s" podCreationTimestamp="2024-12-13 01:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:59.083771119 +0000 UTC m=+47.329096214" watchObservedRunningTime="2024-12-13 01:27:59.103070753 +0000 UTC m=+47.348395848" Dec 13 01:27:59.119821 systemd[1]: run-netns-cni\x2d81a8d1b4\x2dc6d1\x2d9f39\x2d9615\x2d29d4f8608073.mount: Deactivated successfully. Dec 13 01:27:59.119946 systemd[1]: run-netns-cni\x2dcc30179d\x2d3c89\x2d68a0\x2d8b02\x2d3a11c5570546.mount: Deactivated successfully. 
Dec 13 01:27:59.225639 systemd-networkd[1410]: cali473b359572b: Link UP Dec 13 01:27:59.225963 systemd-networkd[1410]: cali473b359572b: Gained carrier Dec 13 01:27:59.229121 sshd[4833]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.065 [INFO][4855] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.080 [INFO][4855] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--x4qc6-eth0 coredns-76f75df574- kube-system c537e295-f131-421b-b6e3-16e9b31f1282 885 0 2024-12-13 01:27:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-x4qc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali473b359572b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.080 [INFO][4855] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.145 [INFO][4884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" HandleID="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.159 [INFO][4884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" HandleID="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-x4qc6", "timestamp":"2024-12-13 01:27:59.144878592 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.159 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.159 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.159 [INFO][4884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.163 [INFO][4884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.175 [INFO][4884] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.187 [INFO][4884] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.189 [INFO][4884] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.191 [INFO][4884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.191 [INFO][4884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.193 [INFO][4884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679 Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.199 [INFO][4884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.205 [INFO][4884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.205 [INFO][4884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" host="localhost" Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.205 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:27:59.246580 containerd[1466]: 2024-12-13 01:27:59.205 [INFO][4884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" HandleID="k8s-pod-network.578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.242666 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:39432.service: Deactivated successfully. 
Dec 13 01:27:59.248402 containerd[1466]: 2024-12-13 01:27:59.214 [INFO][4855] cni-plugin/k8s.go 386: Populated endpoint ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x4qc6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c537e295-f131-421b-b6e3-16e9b31f1282", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-x4qc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali473b359572b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:59.248402 containerd[1466]: 2024-12-13 01:27:59.214 [INFO][4855] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.248402 containerd[1466]: 2024-12-13 01:27:59.214 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali473b359572b ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.248402 containerd[1466]: 2024-12-13 01:27:59.224 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.248402 containerd[1466]: 2024-12-13 01:27:59.225 [INFO][4855] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x4qc6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c537e295-f131-421b-b6e3-16e9b31f1282", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679", Pod:"coredns-76f75df574-x4qc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali473b359572b", MAC:"56:78:9e:8a:d0:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:59.248402 containerd[1466]: 2024-12-13 01:27:59.234 [INFO][4855] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679" Namespace="kube-system" Pod="coredns-76f75df574-x4qc6" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:27:59.251250 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:27:59.256848 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:27:59.265647 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446). Dec 13 01:27:59.266401 systemd-logind[1455]: Removed session 12. 
Dec 13 01:27:59.269170 systemd-networkd[1410]: cali129d8356049: Link UP Dec 13 01:27:59.269469 systemd-networkd[1410]: cali129d8356049: Gained carrier Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.122 [INFO][4880] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.135 [INFO][4880] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0 calico-apiserver-7bb84f74c- calico-apiserver fecc8a64-c7e5-403b-881c-5253c8b42a23 886 0 2024-12-13 01:27:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb84f74c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bb84f74c-x8pkp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali129d8356049 [] []}} ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.135 [INFO][4880] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.180 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" HandleID="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.189 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" HandleID="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003676d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bb84f74c-x8pkp", "timestamp":"2024-12-13 01:27:59.180087295 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.190 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.205 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.205 [INFO][4900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.208 [INFO][4900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.215 [INFO][4900] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.221 [INFO][4900] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.223 [INFO][4900] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.230 [INFO][4900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.230 [INFO][4900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.236 [INFO][4900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14 Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.256 [INFO][4900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.263 [INFO][4900] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.263 [INFO][4900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" host="localhost" Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.263 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
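The [INFO][4900] ipam entries above walk through Calico's block-affinity flow for the calico-apiserver pod: acquire the host-wide IPAM lock, look up the host's affine block (192.168.88.128/26), load it, claim the next free address (192.168.88.134), write the block back to record the claim, then release the lock. The following is a minimal sketch of that bookkeeping, assuming a simple in-memory bitmap rather than Calico's real datastore-backed block type; the names ipamBlock and claimNext are illustrative only.

package main

import (
	"fmt"
	"net"
)

// ipamBlock is an illustrative stand-in for a Calico IPAM affinity block:
// a /26 owned by one host, with one bit of bookkeeping per address.
type ipamBlock struct {
	cidr  *net.IPNet
	inUse map[uint8]bool // host offset (0..63) -> allocated
}

// claimNext returns the lowest free address in the block, mimicking the
// "Attempting to assign 1 addresses from block" / "Writing block in order
// to claim IPs" steps shown in the log.
func (b *ipamBlock) claimNext() (net.IP, error) {
	base := b.cidr.IP.To4()
	ones, bits := b.cidr.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses for a /26
	for off := 0; off < size; off++ {
		if b.inUse[uint8(off)] {
			continue
		}
		b.inUse[uint8(off)] = true // persisted by the real allocator before use
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(off)), nil
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	block := &ipamBlock{cidr: cidr, inUse: map[uint8]bool{}}
	// Pretend .128 through .133 were claimed by earlier pods, as in the log.
	for off := 0; off <= 5; off++ {
		block.inUse[uint8(off)] = true
	}
	ip, err := block.claimNext()
	fmt.Println(ip, err) // 192.168.88.134 <nil>
}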
Dec 13 01:27:59.313109 containerd[1466]: 2024-12-13 01:27:59.263 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" HandleID="k8s-pod-network.7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313945 containerd[1466]: 2024-12-13 01:27:59.266 [INFO][4880] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecc8a64-c7e5-403b-881c-5253c8b42a23", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bb84f74c-x8pkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali129d8356049", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:59.313945 containerd[1466]: 2024-12-13 01:27:59.266 [INFO][4880] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313945 containerd[1466]: 2024-12-13 01:27:59.266 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali129d8356049 ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313945 containerd[1466]: 2024-12-13 01:27:59.269 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313945 containerd[1466]: 2024-12-13 01:27:59.270 [INFO][4880] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" 
Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecc8a64-c7e5-403b-881c-5253c8b42a23", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14", Pod:"calico-apiserver-7bb84f74c-x8pkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali129d8356049", MAC:"76:0f:5e:f0:ce:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:27:59.313945 containerd[1466]: 2024-12-13 01:27:59.287 [INFO][4880] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14" Namespace="calico-apiserver" Pod="calico-apiserver-7bb84f74c-x8pkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:27:59.313945 containerd[1466]: time="2024-12-13T01:27:59.312163230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:59.313945 containerd[1466]: time="2024-12-13T01:27:59.312249141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:59.313945 containerd[1466]: time="2024-12-13T01:27:59.312280280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:59.313945 containerd[1466]: time="2024-12-13T01:27:59.312484923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:59.359582 systemd[1]: Started cri-containerd-578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679.scope - libcontainer container 578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679. 
Dec 13 01:27:59.377183 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:59.409941 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:27:59.410618 containerd[1466]: time="2024-12-13T01:27:59.410164589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4qc6,Uid:c537e295-f131-421b-b6e3-16e9b31f1282,Namespace:kube-system,Attempt:1,} returns sandbox id \"578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679\"" Dec 13 01:27:59.411272 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:59.411818 kubelet[2597]: E1213 01:27:59.411785 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:59.416641 systemd-logind[1455]: New session 13 of user core. Dec 13 01:27:59.420683 containerd[1466]: time="2024-12-13T01:27:59.418345464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:59.420683 containerd[1466]: time="2024-12-13T01:27:59.418437447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:59.420683 containerd[1466]: time="2024-12-13T01:27:59.418504773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:59.420683 containerd[1466]: time="2024-12-13T01:27:59.419644643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:59.421948 containerd[1466]: time="2024-12-13T01:27:59.421892842Z" level=info msg="CreateContainer within sandbox \"578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:27:59.423453 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:27:59.449521 systemd[1]: Started cri-containerd-7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14.scope - libcontainer container 7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14. 
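sshd reports the accepted key as "RSA SHA256:x0r+OYSW..."; that fingerprint format is the unpadded base64 of a SHA-256 over the wire-format public-key blob. A minimal standard-library sketch that reproduces the format from an authorized_keys-style line; the key material below is a placeholder, not the key from this host.

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// fingerprintSHA256 turns the base64 key field of an authorized_keys line
// ("ssh-rsa AAAA... comment") into OpenSSH's "SHA256:..." form: base64
// without padding of SHA-256 over the decoded key blob.
func fingerprintSHA256(authorizedKeysLine string) (string, error) {
	fields := strings.Fields(authorizedKeysLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("not an authorized_keys line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder key material; substitute a real public key to compare
	// against the fingerprint sshd printed above.
	line := "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7 placeholder@example"
	fp, err := fingerprintSHA256(line)
	fmt.Println(fp, err)
}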
Dec 13 01:27:59.467905 systemd-networkd[1410]: cali8ab73e61146: Gained IPv6LL Dec 13 01:27:59.481609 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:27:59.509239 containerd[1466]: time="2024-12-13T01:27:59.509187867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb84f74c-x8pkp,Uid:fecc8a64-c7e5-403b-881c-5253c8b42a23,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14\"" Dec 13 01:27:59.540696 kubelet[2597]: I1213 01:27:59.540640 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:59.541470 kubelet[2597]: E1213 01:27:59.541452 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:59.562351 containerd[1466]: time="2024-12-13T01:27:59.561990392Z" level=info msg="CreateContainer within sandbox \"578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a598df8bd0ab94db90db650f90e27d4c74083374fd45fbf11d808c8ea409f6c\"" Dec 13 01:27:59.564589 containerd[1466]: time="2024-12-13T01:27:59.564504692Z" level=info msg="StartContainer for \"8a598df8bd0ab94db90db650f90e27d4c74083374fd45fbf11d808c8ea409f6c\"" Dec 13 01:27:59.619562 systemd[1]: Started cri-containerd-8a598df8bd0ab94db90db650f90e27d4c74083374fd45fbf11d808c8ea409f6c.scope - libcontainer container 8a598df8bd0ab94db90db650f90e27d4c74083374fd45fbf11d808c8ea409f6c. Dec 13 01:27:59.668077 containerd[1466]: time="2024-12-13T01:27:59.668023567Z" level=info msg="StartContainer for \"8a598df8bd0ab94db90db650f90e27d4c74083374fd45fbf11d808c8ea409f6c\" returns successfully" Dec 13 01:27:59.705661 sshd[4926]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:59.709802 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:39446.service: Deactivated successfully. Dec 13 01:27:59.713722 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:27:59.716762 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:27:59.721018 systemd-logind[1455]: Removed session 13. 
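The entries above show the CRI-level ordering for the coredns pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer then reports success. A minimal sketch of that ordering against a locally defined interface (not the real CRI or containerd client API), just to make the dependency between the three log messages explicit:

package main

import "fmt"

// runtimeService is an illustrative stand-in for the three CRI verbs seen in
// the log; the real API lives in k8s.io/cri-api and uses richer request and
// response types.
type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod replays the ordering in the log: the sandbox must exist before the
// container can be created, and the container must exist before it can start.
func startPod(rt runtimeService, podName, containerName string) (string, error) {
	sandboxID, err := rt.RunPodSandbox(podName)
	if err != nil {
		return "", fmt.Errorf("RunPodSandbox: %w", err)
	}
	containerID, err := rt.CreateContainer(sandboxID, containerName)
	if err != nil {
		return "", fmt.Errorf("CreateContainer in %s: %w", sandboxID, err)
	}
	if err := rt.StartContainer(containerID); err != nil {
		return "", fmt.Errorf("StartContainer %s: %w", containerID, err)
	}
	return containerID, nil
}

// fakeRuntime satisfies runtimeService with canned IDs so the sketch runs
// stand-alone.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "sandbox-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
func (fakeRuntime) StartContainer(id string) error                  { return nil }

func main() {
	id, err := startPod(fakeRuntime{}, "coredns-76f75df574-x4qc6", "coredns")
	fmt.Println(id, err)
}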
Dec 13 01:27:59.780344 kernel: bpftool[5094]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:28:00.041504 systemd-networkd[1410]: cali2486f718283: Gained IPv6LL Dec 13 01:28:00.070707 kubelet[2597]: E1213 01:28:00.070660 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:00.074956 kubelet[2597]: E1213 01:28:00.074902 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:00.075367 kubelet[2597]: E1213 01:28:00.075193 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:00.177706 systemd-networkd[1410]: vxlan.calico: Link UP Dec 13 01:28:00.177719 systemd-networkd[1410]: vxlan.calico: Gained carrier Dec 13 01:28:00.294075 kubelet[2597]: I1213 01:28:00.293137 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x4qc6" podStartSLOduration=35.293080774 podStartE2EDuration="35.293080774s" podCreationTimestamp="2024-12-13 01:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:00.292975556 +0000 UTC m=+48.538300662" watchObservedRunningTime="2024-12-13 01:28:00.293080774 +0000 UTC m=+48.538405869" Dec 13 01:28:00.745442 systemd-networkd[1410]: cali473b359572b: Gained IPv6LL Dec 13 01:28:00.846541 containerd[1466]: time="2024-12-13T01:28:00.846484819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:00.847797 containerd[1466]: time="2024-12-13T01:28:00.847763329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:28:00.849516 containerd[1466]: time="2024-12-13T01:28:00.849491953Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:00.854001 containerd[1466]: time="2024-12-13T01:28:00.853931425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.046917599s" Dec 13 01:28:00.854001 containerd[1466]: time="2024-12-13T01:28:00.854002358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:28:00.854171 containerd[1466]: time="2024-12-13T01:28:00.854135318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:00.858323 containerd[1466]: time="2024-12-13T01:28:00.856502461Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:28:00.863659 containerd[1466]: time="2024-12-13T01:28:00.863600052Z" level=info msg="CreateContainer within sandbox \"4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:28:00.879924 containerd[1466]: time="2024-12-13T01:28:00.879883813Z" level=info msg="CreateContainer within sandbox \"4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1286c737466874fe92b0c2fc77e1eb05ee27b7f0fbd408a67ff5de6af901beb3\"" Dec 13 01:28:00.881076 containerd[1466]: time="2024-12-13T01:28:00.880999196Z" level=info msg="StartContainer for \"1286c737466874fe92b0c2fc77e1eb05ee27b7f0fbd408a67ff5de6af901beb3\"" Dec 13 01:28:00.911482 systemd[1]: Started cri-containerd-1286c737466874fe92b0c2fc77e1eb05ee27b7f0fbd408a67ff5de6af901beb3.scope - libcontainer container 1286c737466874fe92b0c2fc77e1eb05ee27b7f0fbd408a67ff5de6af901beb3. Dec 13 01:28:01.002461 systemd-networkd[1410]: cali129d8356049: Gained IPv6LL Dec 13 01:28:01.138356 containerd[1466]: time="2024-12-13T01:28:01.138228395Z" level=info msg="StartContainer for \"1286c737466874fe92b0c2fc77e1eb05ee27b7f0fbd408a67ff5de6af901beb3\" returns successfully" Dec 13 01:28:01.142218 kubelet[2597]: E1213 01:28:01.142086 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:01.142218 kubelet[2597]: E1213 01:28:01.142190 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:01.513513 systemd-networkd[1410]: vxlan.calico: Gained IPv6LL Dec 13 01:28:02.144531 kubelet[2597]: E1213 01:28:02.144482 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:02.168306 systemd[1]: run-containerd-runc-k8s.io-1286c737466874fe92b0c2fc77e1eb05ee27b7f0fbd408a67ff5de6af901beb3-runc.K1xfw0.mount: Deactivated successfully. 
Dec 13 01:28:02.337127 kubelet[2597]: I1213 01:28:02.337071 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-674bcff85f-qlkvk" podStartSLOduration=28.142410337 podStartE2EDuration="31.3370292s" podCreationTimestamp="2024-12-13 01:27:31 +0000 UTC" firstStartedPulling="2024-12-13 01:27:57.659857616 +0000 UTC m=+45.905182711" lastFinishedPulling="2024-12-13 01:28:00.854476469 +0000 UTC m=+49.099801574" observedRunningTime="2024-12-13 01:28:01.161108033 +0000 UTC m=+49.406433128" watchObservedRunningTime="2024-12-13 01:28:02.3370292 +0000 UTC m=+50.582354295" Dec 13 01:28:04.250700 containerd[1466]: time="2024-12-13T01:28:04.250644611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:04.292984 containerd[1466]: time="2024-12-13T01:28:04.292889913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:28:04.334597 containerd[1466]: time="2024-12-13T01:28:04.334529258Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:04.361839 containerd[1466]: time="2024-12-13T01:28:04.361763744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:04.362559 containerd[1466]: time="2024-12-13T01:28:04.362508171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.505962168s" Dec 13 01:28:04.362605 containerd[1466]: time="2024-12-13T01:28:04.362561811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:28:04.363420 containerd[1466]: time="2024-12-13T01:28:04.363266423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:04.365003 containerd[1466]: time="2024-12-13T01:28:04.364945343Z" level=info msg="CreateContainer within sandbox \"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:28:04.715427 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:39452.service - OpenSSH per-connection server daemon (10.0.0.1:39452). Dec 13 01:28:04.769976 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 39452 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:04.771933 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:04.775843 systemd-logind[1455]: New session 14 of user core. Dec 13 01:28:04.786431 systemd[1]: Started session-14.scope - Session 14 of User core. 
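The pod_startup_latency_tracker entry for calico-kube-controllers reports podStartE2EDuration as observedRunningTime minus podCreationTimestamp (31.337s), while podStartSLOduration (28.142s) is that figure minus the image-pull window lastFinishedPulling minus firstStartedPulling (about 3.195s). That relationship is inferred from the numbers printed here rather than quoted from kubelet source. A minimal sketch that recomputes both values from the timestamps in the log:

package main

import (
	"fmt"
	"strings"
	"time"
)

// parse handles the timestamp format kubelet prints in these events, e.g.
// "2024-12-13 01:27:57.659857616 +0000 UTC m=+45.905182711"; the monotonic
// "m=+..." suffix is stripped before parsing.
func parse(s string) time.Time {
	s = strings.Split(s, " m=")[0]
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2024-12-13 01:27:31 +0000 UTC")
	firstPull := parse("2024-12-13 01:27:57.659857616 +0000 UTC m=+45.905182711")
	lastPull := parse("2024-12-13 01:28:00.854476469 +0000 UTC m=+49.099801574")
	running := parse("2024-12-13 01:28:02.3370292 +0000 UTC m=+50.582354295")

	e2e := running.Sub(created)     // matches podStartE2EDuration="31.3370292s"
	pull := lastPull.Sub(firstPull) // time spent pulling images
	slo := e2e - pull               // within a few ns of podStartSLOduration=28.142410337

	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", slo)
}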
Dec 13 01:28:04.921583 containerd[1466]: time="2024-12-13T01:28:04.921523803Z" level=info msg="CreateContainer within sandbox \"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a230d467995489e8b82084ba88e30e2034ad920d20f1e0027285578410717e43\"" Dec 13 01:28:04.922501 containerd[1466]: time="2024-12-13T01:28:04.922467103Z" level=info msg="StartContainer for \"a230d467995489e8b82084ba88e30e2034ad920d20f1e0027285578410717e43\"" Dec 13 01:28:04.960599 systemd[1]: Started cri-containerd-a230d467995489e8b82084ba88e30e2034ad920d20f1e0027285578410717e43.scope - libcontainer container a230d467995489e8b82084ba88e30e2034ad920d20f1e0027285578410717e43. Dec 13 01:28:05.037607 sshd[5305]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:05.042822 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:39452.service: Deactivated successfully. Dec 13 01:28:05.045346 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:28:05.046224 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:28:05.047329 systemd-logind[1455]: Removed session 14. Dec 13 01:28:05.054380 containerd[1466]: time="2024-12-13T01:28:05.054333324Z" level=info msg="StartContainer for \"a230d467995489e8b82084ba88e30e2034ad920d20f1e0027285578410717e43\" returns successfully" Dec 13 01:28:05.222321 kubelet[2597]: I1213 01:28:05.222246 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-54ctg" podStartSLOduration=25.419414571 podStartE2EDuration="34.222198157s" podCreationTimestamp="2024-12-13 01:27:31 +0000 UTC" firstStartedPulling="2024-12-13 01:27:55.56020162 +0000 UTC m=+43.805526715" lastFinishedPulling="2024-12-13 01:28:04.362985205 +0000 UTC m=+52.608310301" observedRunningTime="2024-12-13 01:28:05.222139968 +0000 UTC m=+53.467465063" watchObservedRunningTime="2024-12-13 01:28:05.222198157 +0000 UTC m=+53.467523252" Dec 13 01:28:05.975561 kubelet[2597]: I1213 01:28:05.975516 2597 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:28:05.976684 kubelet[2597]: I1213 01:28:05.976662 2597 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:28:07.151560 containerd[1466]: time="2024-12-13T01:28:07.151498441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:07.152234 containerd[1466]: time="2024-12-13T01:28:07.152195279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:28:07.153543 containerd[1466]: time="2024-12-13T01:28:07.153504735Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:07.155659 containerd[1466]: time="2024-12-13T01:28:07.155615255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:07.156194 containerd[1466]: time="2024-12-13T01:28:07.156164216Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.792805048s" Dec 13 01:28:07.156255 containerd[1466]: time="2024-12-13T01:28:07.156196346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:28:07.157323 containerd[1466]: time="2024-12-13T01:28:07.157273858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:07.157836 containerd[1466]: time="2024-12-13T01:28:07.157804533Z" level=info msg="CreateContainer within sandbox \"12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:07.170238 containerd[1466]: time="2024-12-13T01:28:07.170114691Z" level=info msg="CreateContainer within sandbox \"12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0cb4efef44e5ea20c6faf48b45e3d9bccca6e37f66bb0d591c4a0005ad03cd66\"" Dec 13 01:28:07.172734 containerd[1466]: time="2024-12-13T01:28:07.172703227Z" level=info msg="StartContainer for \"0cb4efef44e5ea20c6faf48b45e3d9bccca6e37f66bb0d591c4a0005ad03cd66\"" Dec 13 01:28:07.202952 systemd[1]: run-containerd-runc-k8s.io-0cb4efef44e5ea20c6faf48b45e3d9bccca6e37f66bb0d591c4a0005ad03cd66-runc.1fjkcs.mount: Deactivated successfully. Dec 13 01:28:07.217441 systemd[1]: Started cri-containerd-0cb4efef44e5ea20c6faf48b45e3d9bccca6e37f66bb0d591c4a0005ad03cd66.scope - libcontainer container 0cb4efef44e5ea20c6faf48b45e3d9bccca6e37f66bb0d591c4a0005ad03cd66. 
Dec 13 01:28:07.263060 containerd[1466]: time="2024-12-13T01:28:07.263001543Z" level=info msg="StartContainer for \"0cb4efef44e5ea20c6faf48b45e3d9bccca6e37f66bb0d591c4a0005ad03cd66\" returns successfully" Dec 13 01:28:07.519796 containerd[1466]: time="2024-12-13T01:28:07.519728851Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:07.521363 containerd[1466]: time="2024-12-13T01:28:07.520600422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:28:07.523548 containerd[1466]: time="2024-12-13T01:28:07.523472250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 366.150372ms" Dec 13 01:28:07.523548 containerd[1466]: time="2024-12-13T01:28:07.523526553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:28:07.526887 containerd[1466]: time="2024-12-13T01:28:07.526852816Z" level=info msg="CreateContainer within sandbox \"7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:07.541837 containerd[1466]: time="2024-12-13T01:28:07.541758466Z" level=info msg="CreateContainer within sandbox \"7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"08e46cbb15d9cb529578d4cafe70758366e8294d7ead472d3b9a80e39be085c6\"" Dec 13 01:28:07.544057 containerd[1466]: time="2024-12-13T01:28:07.542708604Z" level=info msg="StartContainer for \"08e46cbb15d9cb529578d4cafe70758366e8294d7ead472d3b9a80e39be085c6\"" Dec 13 01:28:07.577552 systemd[1]: Started cri-containerd-08e46cbb15d9cb529578d4cafe70758366e8294d7ead472d3b9a80e39be085c6.scope - libcontainer container 08e46cbb15d9cb529578d4cafe70758366e8294d7ead472d3b9a80e39be085c6. 
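The same ghcr.io/flatcar/calico/apiserver:v3.29.1 image is resolved twice above: the first pull takes about 2.79s, the second returns the same digest in 366ms after reading only 77 bytes and emits ImageUpdate rather than ImageCreate, consistent with the blobs already being in the local content store and only the manifest being re-fetched. A minimal sketch of that digest check, against a hypothetical local store interface rather than containerd's actual API:

package main

import "fmt"

// imageStore is a hypothetical stand-in for a local image/content store that
// can report the digest it already holds for a tag.
type imageStore interface {
	LocalDigest(ref string) (digest string, ok bool)
}

// needsPull compares the digest the registry reports for a tag with what is
// already stored locally; when they match, a "pull" only re-reads the
// manifest, which is why the second PullImage above finishes in milliseconds.
func needsPull(store imageStore, ref, remoteDigest string) bool {
	local, ok := store.LocalDigest(ref)
	return !ok || local != remoteDigest
}

type fakeStore map[string]string

func (s fakeStore) LocalDigest(ref string) (string, bool) { d, ok := s[ref]; return d, ok }

func main() {
	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.1"
	digest := "sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486"
	store := fakeStore{ref: digest} // the first pull already populated the store
	fmt.Println("pull needed:", needsPull(store, ref, digest)) // false
}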
Dec 13 01:28:07.624697 containerd[1466]: time="2024-12-13T01:28:07.624639730Z" level=info msg="StartContainer for \"08e46cbb15d9cb529578d4cafe70758366e8294d7ead472d3b9a80e39be085c6\" returns successfully" Dec 13 01:28:08.193374 kubelet[2597]: I1213 01:28:08.191837 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bb84f74c-x8pkp" podStartSLOduration=29.178688158 podStartE2EDuration="37.191751111s" podCreationTimestamp="2024-12-13 01:27:31 +0000 UTC" firstStartedPulling="2024-12-13 01:27:59.510782219 +0000 UTC m=+47.756107324" lastFinishedPulling="2024-12-13 01:28:07.523845182 +0000 UTC m=+55.769170277" observedRunningTime="2024-12-13 01:28:08.176145144 +0000 UTC m=+56.421470239" watchObservedRunningTime="2024-12-13 01:28:08.191751111 +0000 UTC m=+56.437076206" Dec 13 01:28:08.193374 kubelet[2597]: I1213 01:28:08.192181 2597 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bb84f74c-7wpxw" podStartSLOduration=28.319273347 podStartE2EDuration="37.192121549s" podCreationTimestamp="2024-12-13 01:27:31 +0000 UTC" firstStartedPulling="2024-12-13 01:27:58.283686749 +0000 UTC m=+46.529011844" lastFinishedPulling="2024-12-13 01:28:07.156534951 +0000 UTC m=+55.401860046" observedRunningTime="2024-12-13 01:28:08.190569412 +0000 UTC m=+56.435894507" watchObservedRunningTime="2024-12-13 01:28:08.192121549 +0000 UTC m=+56.437446644" Dec 13 01:28:09.167536 kubelet[2597]: I1213 01:28:09.167488 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:09.167536 kubelet[2597]: I1213 01:28:09.167519 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:10.053137 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:48558.service - OpenSSH per-connection server daemon (10.0.0.1:48558). Dec 13 01:28:10.095202 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 48558 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:10.096633 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:10.100805 systemd-logind[1455]: New session 15 of user core. Dec 13 01:28:10.109409 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:28:10.228673 sshd[5461]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:10.233319 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:48558.service: Deactivated successfully. Dec 13 01:28:10.235798 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:28:10.236494 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:28:10.237407 systemd-logind[1455]: Removed session 15. Dec 13 01:28:11.871940 containerd[1466]: time="2024-12-13T01:28:11.871893730Z" level=info msg="StopPodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\"" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.915 [WARNING][5498] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0", GenerateName:"calico-kube-controllers-674bcff85f-", Namespace:"calico-system", SelfLink:"", UID:"ce304b83-f30f-46db-bfb6-971554b60429", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674bcff85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c", Pod:"calico-kube-controllers-674bcff85f-qlkvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5fc1e8a1e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.915 [INFO][5498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.915 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" iface="eth0" netns="" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.915 [INFO][5498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.915 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.942 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.942 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.942 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.970 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.970 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.973 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:11.981417 containerd[1466]: 2024-12-13 01:28:11.976 [INFO][5498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:11.982085 containerd[1466]: time="2024-12-13T01:28:11.982034188Z" level=info msg="TearDown network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" successfully" Dec 13 01:28:11.982085 containerd[1466]: time="2024-12-13T01:28:11.982062943Z" level=info msg="StopPodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" returns successfully" Dec 13 01:28:11.982611 containerd[1466]: time="2024-12-13T01:28:11.982581585Z" level=info msg="RemovePodSandbox for \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\"" Dec 13 01:28:11.984932 containerd[1466]: time="2024-12-13T01:28:11.984903430Z" level=info msg="Forcibly stopping sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\"" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.062 [WARNING][5530] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0", GenerateName:"calico-kube-controllers-674bcff85f-", Namespace:"calico-system", SelfLink:"", UID:"ce304b83-f30f-46db-bfb6-971554b60429", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"674bcff85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bf220de69d662dd2257182b78834e813e1434bc98b03bdef2b6449d61bc392c", Pod:"calico-kube-controllers-674bcff85f-qlkvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5fc1e8a1e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.062 [INFO][5530] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.062 [INFO][5530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" iface="eth0" netns="" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.063 [INFO][5530] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.063 [INFO][5530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.082 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.082 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.082 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.109 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.109 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" HandleID="k8s-pod-network.0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Workload="localhost-k8s-calico--kube--controllers--674bcff85f--qlkvk-eth0" Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.111 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:12.126598 containerd[1466]: 2024-12-13 01:28:12.113 [INFO][5530] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da" Dec 13 01:28:12.126598 containerd[1466]: time="2024-12-13T01:28:12.125966301Z" level=info msg="TearDown network for sandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" successfully" Dec 13 01:28:12.144505 containerd[1466]: time="2024-12-13T01:28:12.144417928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:12.144677 containerd[1466]: time="2024-12-13T01:28:12.144564712Z" level=info msg="RemovePodSandbox \"0ca0aef10164844346d675b54af89bcc550b19a40a092b6337464c27b17151da\" returns successfully" Dec 13 01:28:12.145361 containerd[1466]: time="2024-12-13T01:28:12.145277157Z" level=info msg="StopPodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\"" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.181 [WARNING][5560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54ctg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca25e48b-50ec-452e-a7dc-d26850ad2858", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33", Pod:"csi-node-driver-54ctg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif856d57227b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.182 [INFO][5560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.182 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" iface="eth0" netns="" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.182 [INFO][5560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.182 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.208 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.208 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.208 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.317 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.318 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.319 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:12.324506 containerd[1466]: 2024-12-13 01:28:12.322 [INFO][5560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.324957 containerd[1466]: time="2024-12-13T01:28:12.324540021Z" level=info msg="TearDown network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" successfully" Dec 13 01:28:12.324957 containerd[1466]: time="2024-12-13T01:28:12.324567234Z" level=info msg="StopPodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" returns successfully" Dec 13 01:28:12.325030 containerd[1466]: time="2024-12-13T01:28:12.325003346Z" level=info msg="RemovePodSandbox for \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\"" Dec 13 01:28:12.325058 containerd[1466]: time="2024-12-13T01:28:12.325036720Z" level=info msg="Forcibly stopping sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\"" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.477 [WARNING][5592] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54ctg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca25e48b-50ec-452e-a7dc-d26850ad2858", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e656d4838df7e078d2a5f6b644636ea47e591cdec3ea0d083b654429dd878c33", Pod:"csi-node-driver-54ctg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif856d57227b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.477 [INFO][5592] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.477 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" iface="eth0" netns="" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.477 [INFO][5592] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.477 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.509 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.509 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.509 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.518 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.519 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" HandleID="k8s-pod-network.46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Workload="localhost-k8s-csi--node--driver--54ctg-eth0" Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.521 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:12.528276 containerd[1466]: 2024-12-13 01:28:12.525 [INFO][5592] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1" Dec 13 01:28:12.528855 containerd[1466]: time="2024-12-13T01:28:12.528334359Z" level=info msg="TearDown network for sandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" successfully" Dec 13 01:28:12.658161 containerd[1466]: time="2024-12-13T01:28:12.658093225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:12.658358 containerd[1466]: time="2024-12-13T01:28:12.658186466Z" level=info msg="RemovePodSandbox \"46bca23b83e9f425b120d9db26212bdbeb7206a8100cd368e748e8b319439dd1\" returns successfully" Dec 13 01:28:12.658749 containerd[1466]: time="2024-12-13T01:28:12.658723872Z" level=info msg="StopPodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\"" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.696 [WARNING][5642] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mfv2r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9f955d26-c47f-4a21-b33a-e3a989a3e532", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1", Pod:"coredns-76f75df574-mfv2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ab73e61146", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.697 [INFO][5642] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.697 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" iface="eth0" netns="" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.697 [INFO][5642] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.697 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.721 [INFO][5649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.721 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.721 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.726 [WARNING][5649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.726 [INFO][5649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.728 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:12.733770 containerd[1466]: 2024-12-13 01:28:12.730 [INFO][5642] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.734603 containerd[1466]: time="2024-12-13T01:28:12.733826685Z" level=info msg="TearDown network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" successfully" Dec 13 01:28:12.734603 containerd[1466]: time="2024-12-13T01:28:12.733860190Z" level=info msg="StopPodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" returns successfully" Dec 13 01:28:12.734603 containerd[1466]: time="2024-12-13T01:28:12.734443586Z" level=info msg="RemovePodSandbox for \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\"" Dec 13 01:28:12.734603 containerd[1466]: time="2024-12-13T01:28:12.734469506Z" level=info msg="Forcibly stopping sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\"" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.773 [WARNING][5672] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mfv2r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9f955d26-c47f-4a21-b33a-e3a989a3e532", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"139ea0f8278044372bef17e0f015f80e64ee74130faf240b3a85d05428450da1", Pod:"coredns-76f75df574-mfv2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ab73e61146", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.773 [INFO][5672] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.773 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" iface="eth0" netns="" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.773 [INFO][5672] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.773 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.796 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.797 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.797 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.802 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.802 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" HandleID="k8s-pod-network.d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Workload="localhost-k8s-coredns--76f75df574--mfv2r-eth0" Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.803 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:12.811689 containerd[1466]: 2024-12-13 01:28:12.806 [INFO][5672] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61" Dec 13 01:28:12.812194 containerd[1466]: time="2024-12-13T01:28:12.811676711Z" level=info msg="TearDown network for sandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" successfully" Dec 13 01:28:12.935273 containerd[1466]: time="2024-12-13T01:28:12.935215830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:12.935741 containerd[1466]: time="2024-12-13T01:28:12.935316865Z" level=info msg="RemovePodSandbox \"d0d670e4fc8df0c91f2776d9eb324afebfb407bac56af9baee3bfebf1e3f6d61\" returns successfully" Dec 13 01:28:12.935894 containerd[1466]: time="2024-12-13T01:28:12.935852930Z" level=info msg="StopPodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\"" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.030 [WARNING][5701] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x4qc6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c537e295-f131-421b-b6e3-16e9b31f1282", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679", Pod:"coredns-76f75df574-x4qc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali473b359572b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.031 [INFO][5701] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.031 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" iface="eth0" netns="" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.031 [INFO][5701] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.031 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.063 [INFO][5709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.064 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.064 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.069 [WARNING][5709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.069 [INFO][5709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.071 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:13.077055 containerd[1466]: 2024-12-13 01:28:13.073 [INFO][5701] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.077055 containerd[1466]: time="2024-12-13T01:28:13.076981698Z" level=info msg="TearDown network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" successfully" Dec 13 01:28:13.077055 containerd[1466]: time="2024-12-13T01:28:13.077008790Z" level=info msg="StopPodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" returns successfully" Dec 13 01:28:13.077578 containerd[1466]: time="2024-12-13T01:28:13.077523724Z" level=info msg="RemovePodSandbox for \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\"" Dec 13 01:28:13.077578 containerd[1466]: time="2024-12-13T01:28:13.077548351Z" level=info msg="Forcibly stopping sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\"" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.163 [WARNING][5731] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x4qc6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c537e295-f131-421b-b6e3-16e9b31f1282", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"578f4b2dc10cf9fcc0ceb94ae5a820aa1fb503ce3bb0fac7f73536cba54cf679", Pod:"coredns-76f75df574-x4qc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali473b359572b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.163 [INFO][5731] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.164 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" iface="eth0" netns="" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.164 [INFO][5731] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.164 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.192 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.193 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.193 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.198 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.198 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" HandleID="k8s-pod-network.1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Workload="localhost-k8s-coredns--76f75df574--x4qc6-eth0" Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.200 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:13.208192 containerd[1466]: 2024-12-13 01:28:13.203 [INFO][5731] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4" Dec 13 01:28:13.208192 containerd[1466]: time="2024-12-13T01:28:13.205946938Z" level=info msg="TearDown network for sandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" successfully" Dec 13 01:28:13.394274 containerd[1466]: time="2024-12-13T01:28:13.394070073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:13.394274 containerd[1466]: time="2024-12-13T01:28:13.394167541Z" level=info msg="RemovePodSandbox \"1057000e128f75684cb97e93b2a611853b8675944ec4c26909f7d8619c794ba4\" returns successfully" Dec 13 01:28:13.395094 containerd[1466]: time="2024-12-13T01:28:13.394771045Z" level=info msg="StopPodSandbox for \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\"" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.436 [WARNING][5762] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecc8a64-c7e5-403b-881c-5253c8b42a23", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14", Pod:"calico-apiserver-7bb84f74c-x8pkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali129d8356049", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.436 [INFO][5762] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.436 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" iface="eth0" netns="" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.436 [INFO][5762] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.436 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.460 [INFO][5769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.460 [INFO][5769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.460 [INFO][5769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.467 [WARNING][5769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.467 [INFO][5769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.469 [INFO][5769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:13.474751 containerd[1466]: 2024-12-13 01:28:13.471 [INFO][5762] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.475532 containerd[1466]: time="2024-12-13T01:28:13.474810666Z" level=info msg="TearDown network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" successfully" Dec 13 01:28:13.475532 containerd[1466]: time="2024-12-13T01:28:13.474841687Z" level=info msg="StopPodSandbox for \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" returns successfully" Dec 13 01:28:13.475532 containerd[1466]: time="2024-12-13T01:28:13.475470600Z" level=info msg="RemovePodSandbox for \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\"" Dec 13 01:28:13.475532 containerd[1466]: time="2024-12-13T01:28:13.475513072Z" level=info msg="Forcibly stopping sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\"" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.514 [WARNING][5792] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecc8a64-c7e5-403b-881c-5253c8b42a23", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e7846d867cefcd5600292b18a8effd27204d5c957c91800fe1e9f1489acde14", Pod:"calico-apiserver-7bb84f74c-x8pkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali129d8356049", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.514 [INFO][5792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.514 [INFO][5792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" iface="eth0" netns="" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.514 [INFO][5792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.514 [INFO][5792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.540 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.540 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.540 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.546 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.546 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" HandleID="k8s-pod-network.4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Workload="localhost-k8s-calico--apiserver--7bb84f74c--x8pkp-eth0" Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.548 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:13.553844 containerd[1466]: 2024-12-13 01:28:13.550 [INFO][5792] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71" Dec 13 01:28:13.554406 containerd[1466]: time="2024-12-13T01:28:13.553901749Z" level=info msg="TearDown network for sandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" successfully" Dec 13 01:28:13.558411 containerd[1466]: time="2024-12-13T01:28:13.558340841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:13.558512 containerd[1466]: time="2024-12-13T01:28:13.558432438Z" level=info msg="RemovePodSandbox \"4f94e70902c68ef07ad579e99d5d65d6bbfd251d2dcac86594c7227ace04ea71\" returns successfully" Dec 13 01:28:13.559076 containerd[1466]: time="2024-12-13T01:28:13.559031904Z" level=info msg="StopPodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\"" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.608 [WARNING][5821] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"693e3e7a-b788-4c48-8270-e5f57917bed1", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349", Pod:"calico-apiserver-7bb84f74c-7wpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2486f718283", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.608 [INFO][5821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.608 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" iface="eth0" netns="" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.608 [INFO][5821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.608 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.634 [INFO][5828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.634 [INFO][5828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.634 [INFO][5828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.643 [WARNING][5828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.643 [INFO][5828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.645 [INFO][5828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:13.651956 containerd[1466]: 2024-12-13 01:28:13.648 [INFO][5821] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.651956 containerd[1466]: time="2024-12-13T01:28:13.651886754Z" level=info msg="TearDown network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" successfully" Dec 13 01:28:13.651956 containerd[1466]: time="2024-12-13T01:28:13.651921561Z" level=info msg="StopPodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" returns successfully" Dec 13 01:28:13.652648 containerd[1466]: time="2024-12-13T01:28:13.652587655Z" level=info msg="RemovePodSandbox for \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\"" Dec 13 01:28:13.652648 containerd[1466]: time="2024-12-13T01:28:13.652648091Z" level=info msg="Forcibly stopping sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\"" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.700 [WARNING][5851] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0", GenerateName:"calico-apiserver-7bb84f74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"693e3e7a-b788-4c48-8270-e5f57917bed1", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb84f74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12e2b02ae7916de32b113f85daf69b71e2f6da6c995bc662fd75fec72a7b3349", Pod:"calico-apiserver-7bb84f74c-7wpxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2486f718283", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.700 [INFO][5851] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.700 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" iface="eth0" netns="" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.700 [INFO][5851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.700 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.730 [INFO][5858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.730 [INFO][5858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.730 [INFO][5858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.768 [WARNING][5858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.768 [INFO][5858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" HandleID="k8s-pod-network.55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Workload="localhost-k8s-calico--apiserver--7bb84f74c--7wpxw-eth0" Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.786 [INFO][5858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:13.793665 containerd[1466]: 2024-12-13 01:28:13.790 [INFO][5851] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846" Dec 13 01:28:13.794261 containerd[1466]: time="2024-12-13T01:28:13.793718080Z" level=info msg="TearDown network for sandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" successfully" Dec 13 01:28:13.822685 containerd[1466]: time="2024-12-13T01:28:13.822625426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:13.822883 containerd[1466]: time="2024-12-13T01:28:13.822724387Z" level=info msg="RemovePodSandbox \"55ed74b45055848cb39e696dcb77fd062771f9895a02c6f8e9caf4f5d9f8e846\" returns successfully" Dec 13 01:28:15.244981 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:48564.service - OpenSSH per-connection server daemon (10.0.0.1:48564). Dec 13 01:28:15.296071 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 48564 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:15.297939 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:15.302809 systemd-logind[1455]: New session 16 of user core. Dec 13 01:28:15.315524 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:28:15.441539 sshd[5867]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:15.446383 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:48564.service: Deactivated successfully. Dec 13 01:28:15.448444 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:28:15.449194 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:28:15.450389 systemd-logind[1455]: Removed session 16. Dec 13 01:28:20.474846 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142). Dec 13 01:28:20.521481 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:20.524121 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:20.536059 systemd-logind[1455]: New session 17 of user core. Dec 13 01:28:20.544738 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:28:20.721917 sshd[5881]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:20.736228 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:53142.service: Deactivated successfully. Dec 13 01:28:20.739234 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 13 01:28:20.742336 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:28:20.750915 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:53144.service - OpenSSH per-connection server daemon (10.0.0.1:53144). Dec 13 01:28:20.752344 systemd-logind[1455]: Removed session 17. Dec 13 01:28:20.794930 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 53144 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:20.795876 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:20.805028 systemd-logind[1455]: New session 18 of user core. Dec 13 01:28:20.812763 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:28:21.404370 sshd[5895]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:21.425776 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:53144.service: Deactivated successfully. Dec 13 01:28:21.430507 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:28:21.434590 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:28:21.441988 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:53146.service - OpenSSH per-connection server daemon (10.0.0.1:53146). Dec 13 01:28:21.443649 systemd-logind[1455]: Removed session 18. Dec 13 01:28:21.498094 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 53146 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:21.500644 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:21.515914 systemd-logind[1455]: New session 19 of user core. Dec 13 01:28:21.534183 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:28:22.246862 kubelet[2597]: E1213 01:28:22.246794 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:24.451403 sshd[5915]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:24.461851 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:53146.service: Deactivated successfully. Dec 13 01:28:24.464650 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:28:24.465790 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:28:24.476617 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:53148.service - OpenSSH per-connection server daemon (10.0.0.1:53148). Dec 13 01:28:24.480052 systemd-logind[1455]: Removed session 19. Dec 13 01:28:24.553159 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 53148 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:24.556164 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:24.563691 systemd-logind[1455]: New session 20 of user core. Dec 13 01:28:24.573756 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:28:24.894497 sshd[5958]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:24.907010 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:53148.service: Deactivated successfully. Dec 13 01:28:24.911054 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:28:24.913891 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:28:24.921065 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:53164.service - OpenSSH per-connection server daemon (10.0.0.1:53164). Dec 13 01:28:24.923949 systemd-logind[1455]: Removed session 20. 
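Sessions 17 through 21 come and go in quick succession, which is typical of a client running discrete commands or an automated check over SSH. If the question is simply "what is logged in right now", asking logind directly is easier than reading the journal; the snippet below is just a thin Go wrapper around the standard `loginctl list-sessions` command for completeness — running loginctl by hand does the same job.

```go
// Prints the sessions systemd-logind currently tracks by shelling out to
// loginctl. Thin wrapper for illustration only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("loginctl", "list-sessions").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "loginctl failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out))
}
```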
Dec 13 01:28:24.963965 sshd[5970]: Accepted publickey for core from 10.0.0.1 port 53164 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:24.967014 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:24.975724 systemd-logind[1455]: New session 21 of user core. Dec 13 01:28:24.981670 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:28:25.125690 sshd[5970]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:25.132564 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:53164.service: Deactivated successfully. Dec 13 01:28:25.135864 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:28:25.136892 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:28:25.138199 systemd-logind[1455]: Removed session 21. Dec 13 01:28:25.884077 kubelet[2597]: E1213 01:28:25.883976 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:30.138852 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:60430.service - OpenSSH per-connection server daemon (10.0.0.1:60430). Dec 13 01:28:30.181698 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 60430 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:30.184250 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:30.189824 systemd-logind[1455]: New session 22 of user core. Dec 13 01:28:30.204530 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:28:30.333415 sshd[5986]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:30.340476 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:60430.service: Deactivated successfully. Dec 13 01:28:30.343622 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:28:30.344752 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:28:30.345973 systemd-logind[1455]: Removed session 22. Dec 13 01:28:30.884534 kubelet[2597]: E1213 01:28:30.884477 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:35.285006 kubelet[2597]: I1213 01:28:35.284953 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:35.352718 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:60440.service - OpenSSH per-connection server daemon (10.0.0.1:60440). Dec 13 01:28:35.396816 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 60440 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:35.398964 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:35.404024 systemd-logind[1455]: New session 23 of user core. Dec 13 01:28:35.412727 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:28:35.533167 sshd[6026]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:35.538410 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:60440.service: Deactivated successfully. Dec 13 01:28:35.540306 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:28:35.540953 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:28:35.541987 systemd-logind[1455]: Removed session 23. 
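The kubelet dns.go "Nameserver limits exceeded" errors repeat throughout this stretch: the resolv.conf kubelet uses as its upstream lists more than the three nameservers it can apply, so the surplus entries are dropped for pods and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are kept. The sketch below reads a resolv.conf and reports any nameservers past that limit; the three-entry constant and the default /etc/resolv.conf path are the conventional values, hard-coded here for illustration.

```go
// Reads a resolv.conf and reports nameserver entries beyond the
// three-entry limit that kubelet applies for pod DNS.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // limit kubelet (and glibc) will actually use

func main() {
	path := "/etc/resolv.conf"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Printf("%d nameserver(s): %s\n", len(servers), strings.Join(servers, " "))
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded: only the first %d will be applied; omitted: %s\n",
			maxNameservers, strings.Join(servers[maxNameservers:], " "))
	}
}
```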
Dec 13 01:28:38.936131 kubelet[2597]: I1213 01:28:38.936077 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:40.546376 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:37692.service - OpenSSH per-connection server daemon (10.0.0.1:37692). Dec 13 01:28:40.594412 sshd[6042]: Accepted publickey for core from 10.0.0.1 port 37692 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:40.596916 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:40.602577 systemd-logind[1455]: New session 24 of user core. Dec 13 01:28:40.611598 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:28:40.748428 sshd[6042]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:40.754243 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:37692.service: Deactivated successfully. Dec 13 01:28:40.757927 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:28:40.758903 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:28:40.760123 systemd-logind[1455]: Removed session 24. Dec 13 01:28:42.884456 kubelet[2597]: E1213 01:28:42.884169 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:44.884000 kubelet[2597]: E1213 01:28:44.883963 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:45.768577 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:37702.service - OpenSSH per-connection server daemon (10.0.0.1:37702). Dec 13 01:28:45.808329 sshd[6084]: Accepted publickey for core from 10.0.0.1 port 37702 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:45.810667 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:45.817234 systemd-logind[1455]: New session 25 of user core. Dec 13 01:28:45.822522 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:28:46.038855 sshd[6084]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:46.044471 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:37702.service: Deactivated successfully. Dec 13 01:28:46.048129 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:28:46.049396 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:28:46.051371 systemd-logind[1455]: Removed session 25.
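The tail of the log alternates kubelet chatter (the DNS limit error again, plus "Failed to trigger a manual run" readiness-probe messages, logged at info level here) with two more short SSH sessions, 24 and 25. A final helper in the same spirit as the earlier ones: it pairs logind's "New session N of user U." and "Removed session N." lines from stdin and reports which sessions opened and closed within the captured window.

```go
// Pairs systemd-logind "New session N of user U." and "Removed session N."
// lines from stdin and reports which sessions opened and closed in the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
	"strconv"
)

func main() {
	newRe := regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	remRe := regexp.MustCompile(`Removed session (\d+)\.`)

	users := map[int]string{}
	closed := map[int]bool{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			id, _ := strconv.Atoi(m[1])
			users[id] = m[2]
		}
		if m := remRe.FindStringSubmatch(line); m != nil {
			id, _ := strconv.Atoi(m[1])
			closed[id] = true
		}
	}

	ids := make([]int, 0, len(users))
	for id := range users {
		ids = append(ids, id)
	}
	sort.Ints(ids)
	for _, id := range ids {
		state := "still open at end of log"
		if closed[id] {
			state = "closed"
		}
		fmt.Printf("session %d (user %s): %s\n", id, users[id], state)
	}
}
```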