Dec 13 01:07:50.884322 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:07:50.884349 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:07:50.884363 kernel: BIOS-provided physical RAM map: Dec 13 01:07:50.884372 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:07:50.884380 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:07:50.884388 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:07:50.884397 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 01:07:50.884406 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 01:07:50.884414 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:07:50.884425 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:07:50.884434 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:07:50.884473 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:07:50.884482 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 01:07:50.884492 kernel: NX (Execute Disable) protection: active Dec 13 01:07:50.884501 kernel: APIC: Static calls initialized Dec 13 01:07:50.884522 kernel: SMBIOS 2.8 present. 
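The BIOS-e820 entries above are the firmware's physical memory map; only the ranges marked "usable" become RAM the kernel can allocate from. A minimal Python sketch of adding up those ranges (the two-line excerpt and the regex are illustrative, not kernel code):

```python
import re

# Illustrative only: sum the ranges the firmware marked "usable" in the
# BIOS-e820 lines above. Addresses are inclusive, so each range spans
# end - start + 1 bytes.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

log = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

usable = 0
for start, end, kind in E820_RE.findall(log):
    if kind == "usable":
        usable += int(end, 16) - int(start, 16) + 1

print(f"{usable} bytes ≈ {usable / 2**20:.0f} MiB usable")  # ≈ 2511 MiB
```

That total is roughly in line with the 2,571,752 K the kernel reports further down as the memory it accounts for.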
Dec 13 01:07:50.884531 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 01:07:50.884540 kernel: Hypervisor detected: KVM Dec 13 01:07:50.884549 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:07:50.884557 kernel: kvm-clock: using sched offset of 2279307155 cycles Dec 13 01:07:50.884567 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:07:50.884576 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:07:50.884585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:07:50.884595 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:07:50.884604 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 01:07:50.884617 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:07:50.884626 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:07:50.884635 kernel: Using GB pages for direct mapping Dec 13 01:07:50.884645 kernel: ACPI: Early table checksum verification disabled Dec 13 01:07:50.884654 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 01:07:50.884663 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884672 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884681 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884694 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 01:07:50.884710 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884720 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884729 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884738 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:07:50.884746 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 01:07:50.884756 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 01:07:50.884770 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 01:07:50.884782 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 01:07:50.884791 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 01:07:50.884801 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 01:07:50.884811 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 01:07:50.884827 kernel: No NUMA configuration found Dec 13 01:07:50.884839 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 01:07:50.884849 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 01:07:50.884864 kernel: Zone ranges: Dec 13 01:07:50.884873 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:07:50.884883 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 01:07:50.884892 kernel: Normal empty Dec 13 01:07:50.884902 kernel: Movable zone start for each node Dec 13 01:07:50.884911 kernel: Early memory node ranges Dec 13 01:07:50.884921 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:07:50.884935 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 01:07:50.884945 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Dec 13 01:07:50.884960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:07:50.884969 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:07:50.884979 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:07:50.884988 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:07:50.884998 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:07:50.885007 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:07:50.885017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:07:50.885026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:07:50.885036 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:07:50.885048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:07:50.885057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:07:50.885067 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:07:50.885077 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:07:50.885086 kernel: TSC deadline timer available Dec 13 01:07:50.885095 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:07:50.885105 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:07:50.885114 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:07:50.885124 kernel: kvm-guest: setup PV sched yield Dec 13 01:07:50.885137 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:07:50.885146 kernel: Booting paravirtualized kernel on KVM Dec 13 01:07:50.885156 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:07:50.885166 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:07:50.885176 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Dec 13 01:07:50.885185 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Dec 13 01:07:50.885195 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:07:50.885204 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:07:50.885213 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:07:50.885227 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:07:50.885237 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:07:50.885247 kernel: random: crng init done Dec 13 01:07:50.885256 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:07:50.885266 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:07:50.885276 kernel: Fallback order for Node 0: 0 Dec 13 01:07:50.885285 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Dec 13 01:07:50.885295 kernel: Policy zone: DMA32 Dec 13 01:07:50.885305 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:07:50.885319 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Dec 13 01:07:50.885329 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:07:50.885338 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:07:50.885348 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:07:50.885357 kernel: Dynamic Preempt: voluntary Dec 13 01:07:50.885367 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:07:50.885377 kernel: rcu: RCU event tracing is enabled. Dec 13 01:07:50.885387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:07:50.885397 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:07:50.885409 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:07:50.885418 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:07:50.885428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:07:50.885520 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:07:50.885528 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:07:50.885535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:07:50.885542 kernel: Console: colour VGA+ 80x25 Dec 13 01:07:50.885549 kernel: printk: console [ttyS0] enabled Dec 13 01:07:50.885556 kernel: ACPI: Core revision 20230628 Dec 13 01:07:50.885566 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:07:50.885573 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:07:50.885580 kernel: x2apic enabled Dec 13 01:07:50.885587 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:07:50.885594 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 01:07:50.885602 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 01:07:50.885609 kernel: kvm-guest: setup PV IPIs Dec 13 01:07:50.885626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:07:50.885633 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:07:50.885641 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:07:50.885648 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:07:50.885658 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:07:50.885665 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:07:50.885673 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:07:50.885680 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:07:50.885687 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:07:50.885697 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:07:50.885705 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:07:50.885712 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:07:50.885719 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:07:50.885727 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:07:50.885734 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:07:50.885742 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:07:50.885750 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:07:50.885759 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:07:50.885767 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:07:50.885774 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:07:50.885782 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:07:50.885789 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:07:50.885797 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:07:50.885804 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:07:50.885811 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:07:50.885819 kernel: landlock: Up and running. Dec 13 01:07:50.885828 kernel: SELinux: Initializing. Dec 13 01:07:50.885836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:07:50.885843 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:07:50.885851 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:07:50.885858 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:07:50.885866 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:07:50.885873 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:07:50.885881 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:07:50.885888 kernel: ... version: 0 Dec 13 01:07:50.885897 kernel: ... bit width: 48 Dec 13 01:07:50.885905 kernel: ... generic registers: 6 Dec 13 01:07:50.885912 kernel: ... value mask: 0000ffffffffffff Dec 13 01:07:50.885919 kernel: ... max period: 00007fffffffffff Dec 13 01:07:50.885927 kernel: ... fixed-purpose events: 0 Dec 13 01:07:50.885934 kernel: ... 
event mask: 000000000000003f Dec 13 01:07:50.885941 kernel: signal: max sigframe size: 1776 Dec 13 01:07:50.885948 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:07:50.885956 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:07:50.885966 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:07:50.885973 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:07:50.885980 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 01:07:50.885988 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:07:50.885995 kernel: smpboot: Max logical packages: 1 Dec 13 01:07:50.886002 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:07:50.886010 kernel: devtmpfs: initialized Dec 13 01:07:50.886017 kernel: x86/mm: Memory block size: 128MB Dec 13 01:07:50.886024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:07:50.886032 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:07:50.886041 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:07:50.886049 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:07:50.886056 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:07:50.886064 kernel: audit: type=2000 audit(1734052069.434:1): state=initialized audit_enabled=0 res=1 Dec 13 01:07:50.886071 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:07:50.886078 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:07:50.886085 kernel: cpuidle: using governor menu Dec 13 01:07:50.886093 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:07:50.886100 kernel: dca service started, version 1.12.1 Dec 13 01:07:50.886110 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:07:50.886118 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 01:07:50.886125 kernel: PCI: Using configuration type 1 for base access Dec 13 01:07:50.886133 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
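The per-CPU and total BogoMIPS figures above follow directly from the TSC frequency: the delay loop was "skipped" and preset from the TSC, and lpj=2794748 is simply the 2794.748 MHz TSC expressed in kHz. A quick check of the arithmetic (HZ=1000 is an assumption that makes the printed values line up):

```python
# Arithmetic check on the numbers above (illustrative, not kernel code).
lpj = 2_794_748                # "preset value.. 5589.49 BogoMIPS (lpj=2794748)"
HZ = 1000                      # assumed tick rate
bogomips = lpj * HZ / 500_000  # kernel's BogoMIPS reporting formula
print(bogomips)                # 5589.496 -> printed as "5589.49 BogoMIPS"
print(4 * bogomips)            # 22357.984 -> "Total of 4 processors activated (22357.98 BogoMIPS)"
```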
Dec 13 01:07:50.886140 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:07:50.886147 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:07:50.886155 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:07:50.886162 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:07:50.886172 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:07:50.886181 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:07:50.886191 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:07:50.886202 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:07:50.886212 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:07:50.886222 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:07:50.886229 kernel: ACPI: Interpreter enabled Dec 13 01:07:50.886237 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:07:50.886244 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:07:50.886252 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:07:50.886262 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:07:50.886269 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:07:50.886280 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:07:50.886482 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:07:50.886620 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:07:50.886742 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:07:50.886752 kernel: PCI host bridge to bus 0000:00 Dec 13 01:07:50.886881 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:07:50.886993 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:07:50.887103 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:07:50.887212 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:07:50.887338 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:07:50.887468 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:07:50.887590 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:07:50.887735 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:07:50.887870 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:07:50.887999 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 01:07:50.888122 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 01:07:50.888247 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 01:07:50.888395 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:07:50.888559 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:07:50.888684 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 01:07:50.888804 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 01:07:50.888923 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 01:07:50.889053 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:07:50.889173 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 01:07:50.889301 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 
01:07:50.889451 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 01:07:50.889602 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:07:50.889724 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 01:07:50.889845 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 01:07:50.889964 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 01:07:50.890083 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 01:07:50.890220 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:07:50.890363 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:07:50.890602 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:07:50.890725 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 01:07:50.890845 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 01:07:50.890973 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:07:50.891101 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:07:50.891117 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:07:50.891125 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:07:50.891133 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:07:50.891141 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:07:50.891148 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:07:50.891156 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:07:50.891163 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:07:50.891171 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:07:50.891178 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:07:50.891188 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:07:50.891196 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:07:50.891203 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:07:50.891211 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:07:50.891219 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:07:50.891226 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:07:50.891234 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:07:50.891241 kernel: iommu: Default domain type: Translated Dec 13 01:07:50.891249 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:07:50.891259 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:07:50.891266 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:07:50.891275 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:07:50.891285 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 01:07:50.891413 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:07:50.891553 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:07:50.891686 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:07:50.891698 kernel: vgaarb: loaded Dec 13 01:07:50.891709 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:07:50.891717 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:07:50.891725 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:07:50.891732 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 
01:07:50.891740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:07:50.891748 kernel: pnp: PnP ACPI init Dec 13 01:07:50.891885 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:07:50.891897 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:07:50.891904 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:07:50.891915 kernel: NET: Registered PF_INET protocol family Dec 13 01:07:50.891923 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:07:50.891930 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:07:50.891938 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:07:50.891946 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:07:50.891953 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:07:50.891961 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:07:50.891968 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:07:50.891978 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:07:50.891986 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:07:50.891994 kernel: NET: Registered PF_XDP protocol family Dec 13 01:07:50.892104 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:07:50.892215 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:07:50.892332 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:07:50.892457 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:07:50.892577 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:07:50.892697 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:07:50.892713 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:07:50.892721 kernel: Initialise system trusted keyrings Dec 13 01:07:50.892728 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:07:50.892736 kernel: Key type asymmetric registered Dec 13 01:07:50.892744 kernel: Asymmetric key parser 'x509' registered Dec 13 01:07:50.892751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:07:50.892759 kernel: io scheduler mq-deadline registered Dec 13 01:07:50.892766 kernel: io scheduler kyber registered Dec 13 01:07:50.892774 kernel: io scheduler bfq registered Dec 13 01:07:50.892784 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:07:50.892792 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:07:50.892800 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:07:50.892813 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:07:50.892821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:07:50.892829 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:07:50.892839 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:07:50.892847 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:07:50.892855 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:07:50.892985 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:07:50.892997 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:07:50.893109 kernel: 
rtc_cmos 00:04: registered as rtc0 Dec 13 01:07:50.893222 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:07:50 UTC (1734052070) Dec 13 01:07:50.893344 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:07:50.893356 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:07:50.893363 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:07:50.893371 kernel: Segment Routing with IPv6 Dec 13 01:07:50.893382 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:07:50.893390 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:07:50.893397 kernel: Key type dns_resolver registered Dec 13 01:07:50.893405 kernel: IPI shorthand broadcast: enabled Dec 13 01:07:50.893412 kernel: sched_clock: Marking stable (624002713, 122536913)->(801444915, -54905289) Dec 13 01:07:50.893420 kernel: registered taskstats version 1 Dec 13 01:07:50.893427 kernel: Loading compiled-in X.509 certificates Dec 13 01:07:50.893450 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:07:50.893458 kernel: Key type .fscrypt registered Dec 13 01:07:50.893469 kernel: Key type fscrypt-provisioning registered Dec 13 01:07:50.893476 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:07:50.893484 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:07:50.893491 kernel: ima: No architecture policies found Dec 13 01:07:50.893499 kernel: clk: Disabling unused clocks Dec 13 01:07:50.893506 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:07:50.893520 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:07:50.893527 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:07:50.893537 kernel: Run /init as init process Dec 13 01:07:50.893545 kernel: with arguments: Dec 13 01:07:50.893552 kernel: /init Dec 13 01:07:50.893560 kernel: with environment: Dec 13 01:07:50.893567 kernel: HOME=/ Dec 13 01:07:50.893574 kernel: TERM=linux Dec 13 01:07:50.893582 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:07:50.893591 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:07:50.893601 systemd[1]: Detected virtualization kvm. Dec 13 01:07:50.893611 systemd[1]: Detected architecture x86-64. Dec 13 01:07:50.893619 systemd[1]: Running in initrd. Dec 13 01:07:50.893627 systemd[1]: No hostname configured, using default hostname. Dec 13 01:07:50.893634 systemd[1]: Hostname set to . Dec 13 01:07:50.893643 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:07:50.893651 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:07:50.893659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:07:50.893667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:07:50.893679 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:07:50.893698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
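The rtc_cmos line prints the same instant both in ISO 8601 form and as a Unix epoch, while the audit record earlier in the log (audit(1734052069.434:1)) carries the epoch alone; converting both shows they sit one second apart, as expected for consecutive boot messages:

```python
from datetime import datetime, timezone

# "setting system clock to 2024-12-13T01:07:50 UTC (1734052070)" and the
# audit timestamp 1734052069.434 refer to the same boot, one second apart.
for epoch in (1734052070, 1734052069.434):
    print(epoch, datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# 1734052070     2024-12-13T01:07:50+00:00
# 1734052069.434 2024-12-13T01:07:49.434000+00:00
```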
Dec 13 01:07:50.893709 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:07:50.893718 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:07:50.893728 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:07:50.893743 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:07:50.893755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:07:50.893766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:07:50.893778 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:07:50.893788 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:07:50.893796 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:07:50.893804 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:07:50.893812 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:07:50.893824 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:07:50.893832 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:07:50.893842 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:07:50.893851 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:07:50.893861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:07:50.893870 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:07:50.893879 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:07:50.893887 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:07:50.893897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:07:50.893905 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:07:50.893913 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:07:50.893922 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:07:50.893930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:07:50.893938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:07:50.893946 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:07:50.893954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:07:50.893963 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:07:50.893994 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 01:07:50.894015 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:07:50.894026 systemd-journald[193]: Journal started Dec 13 01:07:50.894046 systemd-journald[193]: Runtime Journal (/run/log/journal/ae87a35ce74a4e5f8b69593a5f5d61ba) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:07:50.898161 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 01:07:50.922741 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:07:50.924850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:50.927901 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
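The device unit names above (dev-disk-by\x2dlabel-ROOT.device and friends) are systemd's escaped form of the underlying /dev paths: path separators become "-", and anything that is not a plain letter, digit, "_" or "." — including the literal "-" inside a label or PARTUUID — becomes a \xNN escape. A simplified sketch that reproduces the names seen here (the real rules live in systemd's unit_name_from_path()):

```python
# Simplified sketch of how systemd turns a device path into a unit name.
def path_to_unit(path: str, suffix: str = ".device") -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become "-"
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")  # everything else, incl. "-", is \xNN-escaped
    return "".join(out) + suffix

print(path_to_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(path_to_unit("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"))
# dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device
```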
Dec 13 01:07:50.931606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:07:50.932266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:07:50.937197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:07:50.947460 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:07:50.951209 kernel: Bridge firewalling registered Dec 13 01:07:50.950500 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 01:07:50.950901 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:07:50.953287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:07:50.958650 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:07:50.959472 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:07:50.961349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:07:50.965768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:07:50.977464 dracut-cmdline[220]: dracut-dracut-053 Dec 13 01:07:50.978436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:07:50.983288 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:07:50.990635 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:07:51.023360 systemd-resolved[236]: Positive Trust Anchors: Dec 13 01:07:51.023374 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:07:51.023406 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:07:51.026547 systemd-resolved[236]: Defaulting to hostname 'linux'. Dec 13 01:07:51.027828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:07:51.033011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:07:51.078497 kernel: SCSI subsystem initialized Dec 13 01:07:51.088477 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:07:51.099489 kernel: iscsi: registered transport (tcp) Dec 13 01:07:51.119802 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:07:51.119876 kernel: QLogic iSCSI HBA Driver Dec 13 01:07:51.164645 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:07:51.176648 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 13 01:07:51.205196 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:07:51.205264 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:07:51.205280 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:07:51.248473 kernel: raid6: avx2x4 gen() 30333 MB/s Dec 13 01:07:51.265471 kernel: raid6: avx2x2 gen() 30977 MB/s Dec 13 01:07:51.282712 kernel: raid6: avx2x1 gen() 23989 MB/s Dec 13 01:07:51.282794 kernel: raid6: using algorithm avx2x2 gen() 30977 MB/s Dec 13 01:07:51.300638 kernel: raid6: .... xor() 18018 MB/s, rmw enabled Dec 13 01:07:51.300729 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:07:51.321493 kernel: xor: automatically using best checksumming function avx Dec 13 01:07:51.475474 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:07:51.490413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:07:51.506682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:07:51.519001 systemd-udevd[413]: Using default interface naming scheme 'v255'. Dec 13 01:07:51.523736 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:07:51.533657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:07:51.550573 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Dec 13 01:07:51.583872 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:07:51.595619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:07:51.661147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:07:51.674838 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:07:51.686039 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:07:51.688747 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:07:51.691155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:07:51.693642 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:07:51.701547 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:07:51.701615 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:07:51.706469 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:07:51.734589 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:07:51.734761 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:07:51.734774 kernel: GPT:9289727 != 19775487 Dec 13 01:07:51.734784 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:07:51.734795 kernel: GPT:9289727 != 19775487 Dec 13 01:07:51.734805 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:07:51.734815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:07:51.718640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:07:51.718713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:07:51.720536 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:07:51.721733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
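The GPT complaints above are a size mismatch rather than corruption: vda is 19775488 sectors, so the backup GPT header should sit in the last LBA (19775487), but the image carries it at LBA 9289727, which suggests the image was written for a smaller disk than the one it booted on. The arithmetic, and why disk-uuid.service later rewrites the headers:

```python
# Illustrative arithmetic behind "GPT:9289727 != 19775487".
sector = 512
sectors = 19_775_488                 # size of vda as probed by virtio_blk
backup_expected = sectors - 1        # backup GPT header belongs in the last LBA
backup_found = 9_289_727             # where the image's backup header actually is

print(sectors * sector / 1e9)        # 10.125... GB  -> "10.1 GB"
print(sectors * sector / 2**30)      # 9.429... GiB  -> "9.43 GiB"
print(backup_expected, backup_found) # 19775487 vs 9289727: the disk is larger than
                                     # the image expected, so disk-uuid.service below
                                     # updates the primary and secondary headers
```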
Dec 13 01:07:51.721791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:51.725732 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:07:51.745461 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:07:51.745491 kernel: libata version 3.00 loaded. Dec 13 01:07:51.745511 kernel: AES CTR mode by8 optimization enabled Dec 13 01:07:51.748668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:07:51.749398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:07:51.756475 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:07:51.772531 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:07:51.772547 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:07:51.772695 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:07:51.772832 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473) Dec 13 01:07:51.772843 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (460) Dec 13 01:07:51.772854 kernel: scsi host0: ahci Dec 13 01:07:51.773008 kernel: scsi host1: ahci Dec 13 01:07:51.773154 kernel: scsi host2: ahci Dec 13 01:07:51.773297 kernel: scsi host3: ahci Dec 13 01:07:51.773483 kernel: scsi host4: ahci Dec 13 01:07:51.773634 kernel: scsi host5: ahci Dec 13 01:07:51.773774 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:07:51.773786 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:07:51.773799 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:07:51.773816 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:07:51.773829 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:07:51.773839 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:07:51.767033 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:07:51.810674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:51.823321 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:07:51.828184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:07:51.832347 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:07:51.832788 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:07:51.847591 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:07:51.848737 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:07:51.862690 disk-uuid[557]: Primary Header is updated. Dec 13 01:07:51.862690 disk-uuid[557]: Secondary Entries is updated. Dec 13 01:07:51.862690 disk-uuid[557]: Secondary Header is updated. Dec 13 01:07:51.866505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:07:51.868618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:07:51.872309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:07:52.082415 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:07:52.082536 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:07:52.082548 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:07:52.082558 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:07:52.083471 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:07:52.084465 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:07:52.085473 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:07:52.085506 kernel: ata3.00: applying bridge limits Dec 13 01:07:52.086512 kernel: ata3.00: configured for UDMA/100 Dec 13 01:07:52.087468 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:07:52.135472 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:07:52.149226 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:07:52.149244 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:07:52.871482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:07:52.872469 disk-uuid[565]: The operation has completed successfully. Dec 13 01:07:52.900516 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:07:52.900656 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:07:52.921697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:07:52.927395 sh[593]: Success Dec 13 01:07:52.941468 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:07:52.974171 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:07:53.000259 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:07:53.005231 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:07:53.016811 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:07:53.016839 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:07:53.016851 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:07:53.017846 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:07:53.018613 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:07:53.023723 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:07:53.024935 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:07:53.029595 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:07:53.032260 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:07:53.040461 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:07:53.042520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:07:53.042563 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:07:53.044741 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:07:53.054313 systemd[1]: mnt-oem.mount: Deactivated successfully. 
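verity-setup.service opens /dev/mapper/usr as a dm-verity device whose SHA-256 hash-tree root must match the verity.usrhash= value on the kernel command line (the "sha256-ni" line shows the accelerated implementation in use). A conceptual Python sketch of the hash-tree idea only — real dm-verity adds a salt, a fixed on-disk layout, and verifies blocks lazily on read rather than up front:

```python
import hashlib

# Conceptual: hash every 4 KiB data block, then keep hashing packed levels of
# digests until a single root remains; that root is what verity.usrhash= pins.
BLOCK = 4096

def hash_tree_root(data: bytes) -> bytes:
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)] or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        packed = b"".join(level)                      # pack child digests
        level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                 for i in range(0, len(packed), BLOCK)]
    return level[0]

print(hash_tree_root(b"\0" * 4 * BLOCK).hex())
```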
Dec 13 01:07:53.056223 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:07:53.065937 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:07:53.071763 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:07:53.127999 ignition[682]: Ignition 2.19.0 Dec 13 01:07:53.128014 ignition[682]: Stage: fetch-offline Dec 13 01:07:53.128068 ignition[682]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:53.128082 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:53.128209 ignition[682]: parsed url from cmdline: "" Dec 13 01:07:53.128214 ignition[682]: no config URL provided Dec 13 01:07:53.128221 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:07:53.128234 ignition[682]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:07:53.128269 ignition[682]: op(1): [started] loading QEMU firmware config module Dec 13 01:07:53.128276 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:07:53.138624 ignition[682]: op(1): [finished] loading QEMU firmware config module Dec 13 01:07:53.162469 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:07:53.168622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:07:53.183646 ignition[682]: parsing config with SHA512: c9d3bba5aac567f3386ccdc6afe003a6869ed5d8f2abb461b1e2130804900bf253bb42ee885537b5506cc452e28b392b52067552bbd37967c20205d24bd4bcdc Dec 13 01:07:53.188996 unknown[682]: fetched base config from "system" Dec 13 01:07:53.189008 unknown[682]: fetched user config from "qemu" Dec 13 01:07:53.189373 ignition[682]: fetch-offline: fetch-offline passed Dec 13 01:07:53.189525 ignition[682]: Ignition finished successfully Dec 13 01:07:53.192524 systemd-networkd[781]: lo: Link UP Dec 13 01:07:53.192528 systemd-networkd[781]: lo: Gained carrier Dec 13 01:07:53.194101 systemd-networkd[781]: Enumeration completed Dec 13 01:07:53.194315 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:07:53.194555 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:07:53.194559 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:07:53.195922 systemd-networkd[781]: eth0: Link UP Dec 13 01:07:53.195927 systemd-networkd[781]: eth0: Gained carrier Dec 13 01:07:53.195934 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:07:53.197855 systemd[1]: Reached target network.target - Network. Dec 13 01:07:53.205523 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:07:53.205950 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:07:53.223526 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:07:53.223701 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
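Ignition logs the SHA-512 digest of the config it ends up parsing ("parsing config with SHA512: c9d3bba5…"), here a merge of the base config fetched from "system" and the user config from "qemu". A digest computed the same way over a candidate config can be compared against that log line; the config below is a made-up stand-in, not the one used in this boot:

```python
import hashlib
import json

# Hypothetical candidate config, only to show the digest computation.
candidate = json.dumps({"ignition": {"version": "3.4.0"}}).encode()
print(hashlib.sha512(candidate).hexdigest())  # compare against the "SHA512:" log line
```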
Dec 13 01:07:53.239879 ignition[784]: Ignition 2.19.0 Dec 13 01:07:53.239890 ignition[784]: Stage: kargs Dec 13 01:07:53.240082 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:53.240093 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:53.243935 ignition[784]: kargs: kargs passed Dec 13 01:07:53.243987 ignition[784]: Ignition finished successfully Dec 13 01:07:53.248608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:07:53.261580 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:07:53.273153 ignition[793]: Ignition 2.19.0 Dec 13 01:07:53.273164 ignition[793]: Stage: disks Dec 13 01:07:53.273334 ignition[793]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:53.273346 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:53.274172 ignition[793]: disks: disks passed Dec 13 01:07:53.276239 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:07:53.274210 ignition[793]: Ignition finished successfully Dec 13 01:07:53.277711 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:07:53.279248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:07:53.281391 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:07:53.282423 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:07:53.284159 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:07:53.297572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:07:53.308724 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.52 Dec 13 01:07:53.308740 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Dec 13 01:07:53.331142 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:07:53.415457 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:07:53.424529 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:07:53.513480 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:07:53.514110 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:07:53.515910 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:07:53.527530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:07:53.529180 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:07:53.530280 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:07:53.535574 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Dec 13 01:07:53.530317 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 01:07:53.541478 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:07:53.542348 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:07:53.542361 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:07:53.542372 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:07:53.530338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:07:53.539360 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:07:53.543458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:07:53.546296 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:07:53.585748 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:07:53.589749 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:07:53.594656 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:07:53.599256 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:07:53.684150 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:07:53.701553 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:07:53.703683 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:07:53.710465 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:07:53.730305 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:07:53.736425 ignition[925]: INFO : Ignition 2.19.0 Dec 13 01:07:53.736425 ignition[925]: INFO : Stage: mount Dec 13 01:07:53.738046 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:53.738046 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:53.738046 ignition[925]: INFO : mount: mount passed Dec 13 01:07:53.738046 ignition[925]: INFO : Ignition finished successfully Dec 13 01:07:53.744003 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:07:53.754699 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:07:54.016604 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:07:54.033723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:07:54.040464 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Dec 13 01:07:54.040498 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:07:54.043088 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:07:54.043111 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:07:54.045473 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:07:54.047164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:07:54.067571 ignition[956]: INFO : Ignition 2.19.0 Dec 13 01:07:54.067571 ignition[956]: INFO : Stage: files Dec 13 01:07:54.069344 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:54.069344 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:54.069344 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:07:54.072825 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:07:54.072825 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:07:54.075684 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:07:54.075684 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:07:54.075684 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:07:54.075684 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:07:54.075684 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:07:54.073957 unknown[956]: wrote ssh authorized keys file for user: core Dec 13 01:07:54.115678 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:07:54.189242 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:07:54.189242 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:07:54.193495 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:07:54.193495 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:07:54.193495 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:07:54.198550 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:07:54.200362 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:07:54.202115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:07:54.203908 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:07:54.205805 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:07:54.207660 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:07:54.209417 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:07:54.211963 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:07:54.214398 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:07:54.216566 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:07:54.634816 systemd-networkd[781]: eth0: Gained IPv6LL Dec 13 01:07:54.704559 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:07:55.016149 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:07:55.016149 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:07:55.020217 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:07:55.022702 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:07:55.022702 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:07:55.026142 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:07:55.026142 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:07:55.029788 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:07:55.029788 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:07:55.033319 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:07:55.054904 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:07:55.060181 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:07:55.061862 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:07:55.061862 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:07:55.064568 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:07:55.065992 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:07:55.067779 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:07:55.069420 ignition[956]: INFO : files: files passed Dec 13 01:07:55.070185 ignition[956]: INFO : Ignition finished successfully Dec 13 01:07:55.073179 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:07:55.078682 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:07:55.081557 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 13 01:07:55.084519 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:07:55.085547 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:07:55.091501 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:07:55.095625 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:07:55.095625 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:07:55.098938 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:07:55.098102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:07:55.100393 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:07:55.114602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:07:55.138073 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:07:55.138202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:07:55.139074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:07:55.141756 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:07:55.142110 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:07:55.145783 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:07:55.163571 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:07:55.170662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:07:55.182462 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:07:55.185184 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:07:55.185936 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:07:55.188178 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:07:55.188321 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:07:55.191908 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:07:55.194315 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:07:55.196413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:07:55.197020 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:07:55.200160 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:07:55.200809 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:07:55.205109 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:07:55.205513 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:07:55.210793 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:07:55.211392 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:07:55.214423 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:07:55.214601 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:07:55.218093 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:07:55.218933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:07:55.221984 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:07:55.222108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:07:55.224927 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:07:55.225075 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:07:55.229486 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:07:55.229641 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:07:55.230170 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:07:55.233316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:07:55.237124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:07:55.241090 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:07:55.242913 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:07:55.244967 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:07:55.246041 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:07:55.248414 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:07:55.249527 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:07:55.251961 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:07:55.253382 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:07:55.256420 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:07:55.257643 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:07:55.269685 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:07:55.272848 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:07:55.274945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:07:55.275812 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:07:55.278119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:07:55.282365 ignition[1011]: INFO : Ignition 2.19.0 Dec 13 01:07:55.282365 ignition[1011]: INFO : Stage: umount Dec 13 01:07:55.278250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:07:55.288495 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:55.288495 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:55.288495 ignition[1011]: INFO : umount: umount passed Dec 13 01:07:55.288495 ignition[1011]: INFO : Ignition finished successfully Dec 13 01:07:55.285429 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:07:55.285573 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:07:55.296571 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:07:55.296716 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:07:55.297409 systemd[1]: Stopped target network.target - Network. Dec 13 01:07:55.300892 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:07:55.300948 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Dec 13 01:07:55.301952 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:07:55.301994 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:07:55.302332 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:07:55.302374 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:07:55.308038 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:07:55.308136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:07:55.309247 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:07:55.311857 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:07:55.314961 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:07:55.321548 systemd-networkd[781]: eth0: DHCPv6 lease lost Dec 13 01:07:55.324060 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:07:55.325299 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:07:55.329270 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:07:55.329613 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:07:55.334550 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:07:55.335687 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:07:55.353581 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:07:55.355878 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:07:55.355955 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:07:55.360200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:07:55.361267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:07:55.363690 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:07:55.364853 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:07:55.367379 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:07:55.368574 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:07:55.371551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:07:55.383460 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:07:55.383638 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:07:55.386051 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:07:55.386215 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:07:55.387831 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:07:55.387906 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:07:55.389800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:07:55.389839 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:07:55.392166 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:07:55.392218 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:07:55.396354 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:07:55.396406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Dec 13 01:07:55.399826 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:07:55.399881 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:07:55.418686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:07:55.421366 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:07:55.421479 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:07:55.425586 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:07:55.426752 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:07:55.429795 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:07:55.429854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:07:55.434122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:07:55.434234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:55.438614 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:07:55.439930 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:07:55.457793 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:07:55.459051 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:07:55.461995 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:07:55.464379 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:07:55.465540 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:07:55.481831 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:07:55.489778 systemd[1]: Switching root. Dec 13 01:07:55.525268 systemd-journald[193]: Journal stopped Dec 13 01:07:56.549831 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Dec 13 01:07:56.549909 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:07:56.549923 kernel: SELinux: policy capability open_perms=1 Dec 13 01:07:56.549935 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:07:56.549953 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:07:56.549964 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:07:56.549975 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:07:56.549989 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:07:56.550001 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:07:56.550012 kernel: audit: type=1403 audit(1734052075.837:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:07:56.550037 systemd[1]: Successfully loaded SELinux policy in 42.369ms. Dec 13 01:07:56.550057 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.235ms. Dec 13 01:07:56.550069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:07:56.550081 systemd[1]: Detected virtualization kvm. Dec 13 01:07:56.550093 systemd[1]: Detected architecture x86-64. 
Dec 13 01:07:56.550104 systemd[1]: Detected first boot. Dec 13 01:07:56.550118 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:07:56.550130 zram_generator::config[1056]: No configuration found. Dec 13 01:07:56.550143 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:07:56.550155 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:07:56.550167 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:07:56.550185 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:07:56.550197 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:07:56.550210 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:07:56.550224 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:07:56.550236 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:07:56.550248 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:07:56.550260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:07:56.550271 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:07:56.550283 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:07:56.550295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:07:56.550307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:07:56.550319 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:07:56.550333 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:07:56.550344 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:07:56.550357 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:07:56.550369 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:07:56.550380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:07:56.550392 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:07:56.550404 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:07:56.550416 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:07:56.550430 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:07:56.550455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:07:56.550474 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:07:56.550486 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:07:56.550498 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:07:56.550510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:07:56.550522 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:07:56.550533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:07:56.550545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 13 01:07:56.550559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:07:56.550572 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:07:56.550584 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:07:56.550596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:07:56.550608 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:07:56.550620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:56.550632 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:07:56.550643 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:07:56.550657 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:07:56.550670 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:07:56.550682 systemd[1]: Reached target machines.target - Containers. Dec 13 01:07:56.550693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:07:56.550705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:07:56.550717 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:07:56.550729 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:07:56.550741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:56.550759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:07:56.550773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:56.550785 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:07:56.550796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:07:56.550811 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:07:56.550823 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:07:56.550835 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:07:56.550847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:07:56.550858 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:07:56.550872 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:07:56.550884 kernel: loop: module loaded Dec 13 01:07:56.550894 kernel: fuse: init (API version 7.39) Dec 13 01:07:56.550906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:07:56.550918 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:07:56.550929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:07:56.550941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:07:56.550953 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:07:56.550964 systemd[1]: Stopped verity-setup.service. 
Dec 13 01:07:56.550977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:56.551009 systemd-journald[1130]: Collecting audit messages is disabled. Dec 13 01:07:56.551032 kernel: ACPI: bus type drm_connector registered Dec 13 01:07:56.551044 systemd-journald[1130]: Journal started Dec 13 01:07:56.551065 systemd-journald[1130]: Runtime Journal (/run/log/journal/ae87a35ce74a4e5f8b69593a5f5d61ba) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:07:56.335792 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:07:56.351331 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:07:56.351800 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:07:56.562513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:07:56.564604 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:07:56.565347 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:07:56.566598 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:07:56.567722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:07:56.568949 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:07:56.570181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:07:56.571478 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:07:56.572935 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:07:56.574569 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:07:56.574750 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:07:56.576240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:56.576412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:07:56.578022 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:07:56.578241 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:07:56.579696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:56.579870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:56.581522 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:07:56.581701 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:07:56.583098 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:07:56.583274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:07:56.584703 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:07:56.586289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:07:56.587852 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:07:56.603617 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:07:56.619603 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:07:56.622090 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 13 01:07:56.623223 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:07:56.623259 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:07:56.625252 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:07:56.627544 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:07:56.633601 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:07:56.634801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:56.636514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:07:56.638567 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:07:56.639779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:07:56.644551 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:07:56.645771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:07:56.650124 systemd-journald[1130]: Time spent on flushing to /var/log/journal/ae87a35ce74a4e5f8b69593a5f5d61ba is 14.672ms for 951 entries. Dec 13 01:07:56.650124 systemd-journald[1130]: System Journal (/var/log/journal/ae87a35ce74a4e5f8b69593a5f5d61ba) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:07:56.673629 systemd-journald[1130]: Received client request to flush runtime journal. Dec 13 01:07:56.649569 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:07:56.653075 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:07:56.657599 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:07:56.660654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:07:56.662107 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:07:56.663424 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:07:56.664997 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:07:56.667480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:07:56.674541 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:07:56.685884 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:07:56.691252 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:07:56.691648 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:07:56.693744 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:07:56.695575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:07:56.706096 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Dec 13 01:07:56.706115 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Dec 13 01:07:56.710608 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
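[Editor's note] The journal flush statistics above (14.672 ms spent flushing 951 entries to the persistent journal) work out to roughly 15 µs per entry:

    # Average flush cost per entry, from the systemd-journald[1130] line above.
    flush_ms, entries = 14.672, 951
    print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # ~15.4 µs
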
Dec 13 01:07:56.711282 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:07:56.713584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:07:56.722469 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:07:56.729832 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:07:56.731200 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:07:56.747480 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 01:07:56.756372 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:07:56.763618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:07:56.780469 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:07:56.781998 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Dec 13 01:07:56.782020 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Dec 13 01:07:56.787167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:07:56.815489 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:07:56.826480 kernel: loop4: detected capacity change from 0 to 210664 Dec 13 01:07:56.835585 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:07:56.847808 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:07:56.848715 (sd-merge)[1198]: Merged extensions into '/usr'. Dec 13 01:07:56.853635 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:07:56.853656 systemd[1]: Reloading... Dec 13 01:07:56.929479 zram_generator::config[1236]: No configuration found. Dec 13 01:07:56.967239 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:07:57.029296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:07:57.077947 systemd[1]: Reloading finished in 223 ms. Dec 13 01:07:57.115515 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:07:57.117104 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:07:57.131600 systemd[1]: Starting ensure-sysext.service... Dec 13 01:07:57.133391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:07:57.140639 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:07:57.140654 systemd[1]: Reloading... Dec 13 01:07:57.157251 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:07:57.157653 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:07:57.158829 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:07:57.159118 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 13 01:07:57.159193 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. 
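[Editor's note] The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier in this log. A small sketch for listing which .raw images are visible to systemd-sysext is below; /etc/extensions comes from the log, while the other directories are assumed from systemd-sysext's documented search path rather than from this boot.

    # Sketch: list sysext images systemd-sysext could pick up on this host.
    # /etc/extensions is seen in the log; /run/extensions and
    # /var/lib/extensions are assumed search-path locations.
    from pathlib import Path

    for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        path = Path(directory)
        if path.is_dir():
            for image in sorted(path.glob("*.raw")):
                print(image)
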
Dec 13 01:07:57.162949 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:07:57.163041 systemd-tmpfiles[1263]: Skipping /boot Dec 13 01:07:57.177227 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:07:57.177389 systemd-tmpfiles[1263]: Skipping /boot Dec 13 01:07:57.187477 zram_generator::config[1290]: No configuration found. Dec 13 01:07:57.296786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:07:57.345457 systemd[1]: Reloading finished in 204 ms. Dec 13 01:07:57.364615 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:07:57.377853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:07:57.385095 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:07:57.387819 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:07:57.390103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:07:57.394829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:07:57.400657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:07:57.408649 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:07:57.416065 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:07:57.421422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:57.421658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:07:57.423904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:57.427849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:57.432821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:07:57.433570 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Dec 13 01:07:57.434138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:57.434249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:57.436079 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:07:57.438200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:57.439153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:57.441165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:57.441592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:07:57.444952 augenrules[1354]: No rules Dec 13 01:07:57.446582 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:07:57.448434 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:07:57.454968 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 01:07:57.455161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:07:57.459872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:57.460120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:07:57.471828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:57.478089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:57.481850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:57.491793 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:07:57.493387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:57.494119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:07:57.496314 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:07:57.498123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:57.498333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:07:57.500923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:57.501141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:57.503702 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:07:57.511256 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:07:57.518563 systemd[1]: Finished ensure-sysext.service. Dec 13 01:07:57.524456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:57.524615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:07:57.533678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:57.537636 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:07:57.543603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:57.545955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:07:57.548357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:57.551724 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:07:57.555608 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:07:57.556745 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:07:57.556771 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:57.557303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:57.557540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 01:07:57.558864 systemd-resolved[1333]: Positive Trust Anchors: Dec 13 01:07:57.559249 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:07:57.559431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:07:57.560195 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:07:57.560262 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:07:57.560945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:57.561299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:57.564476 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376) Dec 13 01:07:57.571462 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1376) Dec 13 01:07:57.570644 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:07:57.571615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:07:57.572071 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:07:57.572297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:07:57.573272 systemd-resolved[1333]: Defaulting to hostname 'linux'. Dec 13 01:07:57.576802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:07:57.577416 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:07:57.578890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:07:57.587484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382) Dec 13 01:07:57.628489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:07:57.636628 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:07:57.637779 systemd-networkd[1401]: lo: Link UP Dec 13 01:07:57.637793 systemd-networkd[1401]: lo: Gained carrier Dec 13 01:07:57.639427 systemd-networkd[1401]: Enumeration completed Dec 13 01:07:57.639622 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:07:57.640516 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:07:57.640525 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:07:57.641167 systemd-networkd[1401]: eth0: Link UP Dec 13 01:07:57.641177 systemd-networkd[1401]: eth0: Gained carrier Dec 13 01:07:57.641188 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:07:57.641609 systemd[1]: Reached target network.target - Network. Dec 13 01:07:57.646614 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:07:57.651689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:07:57.651237 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:07:57.652875 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:07:57.655497 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:07:57.656171 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Dec 13 01:07:57.657658 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:07:57.657757 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2024-12-13 01:07:57.257992 UTC. Dec 13 01:07:57.659469 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:07:57.662141 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:07:57.671201 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:07:57.676225 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:07:57.677053 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:07:57.685663 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:07:57.717817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:07:57.719857 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:07:57.792527 kernel: kvm_amd: TSC scaling supported Dec 13 01:07:57.792671 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:07:57.792751 kernel: kvm_amd: Nested Paging enabled Dec 13 01:07:57.792785 kernel: kvm_amd: LBR virtualization supported Dec 13 01:07:57.792808 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:07:57.792829 kernel: kvm_amd: Virtual GIF supported Dec 13 01:07:57.808125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:57.811480 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:07:57.849900 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:07:57.862675 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:07:57.870786 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:07:57.900645 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:07:57.902238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:07:57.903381 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:07:57.904592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:07:57.905878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:07:57.907397 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:07:57.908645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:07:57.910173 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
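[Editor's note] The systemd-networkd lines above show eth0 taking a DHCPv4 lease of 10.0.0.52/16 with gateway 10.0.0.1. Expanding that lease with the standard-library ipaddress module (values copied from the log):

    # The DHCPv4 lease reported by systemd-networkd[1401] above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.52/16")
    print(iface.network)                # 10.0.0.0/16
    print(iface.network.netmask)        # 255.255.0.0
    print(iface.network.num_addresses)  # 65536
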
Dec 13 01:07:57.911429 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:07:57.911470 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:07:57.912380 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:07:57.914278 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:07:57.917223 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:07:57.930391 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:07:57.933077 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:07:57.934758 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:07:57.935939 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:07:57.936932 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:07:57.937944 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:07:57.937973 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:07:57.939031 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:07:57.941099 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:07:57.942086 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:07:57.944549 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:07:57.950533 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:07:57.951683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:07:57.953656 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:07:57.955183 jq[1439]: false Dec 13 01:07:57.958538 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:07:57.962569 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:07:57.966616 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:07:57.975392 extend-filesystems[1440]: Found loop3 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found loop4 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found loop5 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found sr0 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda1 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda2 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda3 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found usr Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda4 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda6 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda7 Dec 13 01:07:57.975392 extend-filesystems[1440]: Found vda9 Dec 13 01:07:57.975392 extend-filesystems[1440]: Checking size of /dev/vda9 Dec 13 01:07:57.975130 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:07:57.981231 dbus-daemon[1438]: [system] SELinux support is enabled Dec 13 01:07:57.976611 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 01:07:57.977028 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:07:57.982045 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:07:57.985027 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:07:57.992349 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:07:57.996173 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:07:58.001690 extend-filesystems[1440]: Resized partition /dev/vda9 Dec 13 01:07:58.003995 update_engine[1455]: I20241213 01:07:58.003484 1455 main.cc:92] Flatcar Update Engine starting Dec 13 01:07:58.013963 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:07:58.004947 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:07:58.005137 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:07:58.017513 update_engine[1455]: I20241213 01:07:58.016649 1455 update_check_scheduler.cc:74] Next update check in 11m54s Dec 13 01:07:58.005493 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:07:58.017652 jq[1457]: true Dec 13 01:07:58.005681 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:07:58.007781 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:07:58.007997 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:07:58.024300 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1377) Dec 13 01:07:58.024330 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:07:58.029137 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:07:58.029981 jq[1469]: true Dec 13 01:07:58.039639 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:07:58.041418 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:07:58.042644 systemd-logind[1448]: New seat seat0. Dec 13 01:07:58.043157 tar[1463]: linux-amd64/helm Dec 13 01:07:58.048886 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:07:58.050355 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:07:58.050382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:07:58.053428 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:07:58.053477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:07:58.058447 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:07:58.061979 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:07:58.063806 systemd[1]: Started systemd-logind.service - User Login Management. 
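[Editor's note] The kernel lines above show the root filesystem on /dev/vda9 being grown online from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 7.1 GiB. As a quick check:

    # /dev/vda9 size before and after the online resize reported above
    # (block counts from the log, 4 KiB block size).
    BLOCK = 4096
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB, after: 7.11 GiB
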
Dec 13 01:07:58.085280 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:07:58.085280 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:07:58.085280 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:07:58.097076 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Dec 13 01:07:58.086232 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:07:58.086857 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:07:58.104346 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:07:58.109771 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:07:58.111556 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:07:58.113492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:07:58.134094 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:07:58.157797 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:07:58.164685 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:07:58.172722 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:07:58.172936 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:07:58.176756 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:07:58.190979 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:07:58.198880 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:07:58.201850 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:07:58.203113 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:07:58.221081 containerd[1470]: time="2024-12-13T01:07:58.221001674Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:07:58.243805 containerd[1470]: time="2024-12-13T01:07:58.243757545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245333 containerd[1470]: time="2024-12-13T01:07:58.245292861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245333 containerd[1470]: time="2024-12-13T01:07:58.245320378Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:07:58.245383 containerd[1470]: time="2024-12-13T01:07:58.245334989Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:07:58.245574 containerd[1470]: time="2024-12-13T01:07:58.245521378Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:07:58.245574 containerd[1470]: time="2024-12-13T01:07:58.245541310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245640 containerd[1470]: time="2024-12-13T01:07:58.245621721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245660 containerd[1470]: time="2024-12-13T01:07:58.245642871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245850 containerd[1470]: time="2024-12-13T01:07:58.245831107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245850 containerd[1470]: time="2024-12-13T01:07:58.245848316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245898 containerd[1470]: time="2024-12-13T01:07:58.245860985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245898 containerd[1470]: time="2024-12-13T01:07:58.245870903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.245974 containerd[1470]: time="2024-12-13T01:07:58.245957863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.246202 containerd[1470]: time="2024-12-13T01:07:58.246177452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:58.246310 containerd[1470]: time="2024-12-13T01:07:58.246292235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:58.246310 containerd[1470]: time="2024-12-13T01:07:58.246307645Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:07:58.246418 containerd[1470]: time="2024-12-13T01:07:58.246402563Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:07:58.246532 containerd[1470]: time="2024-12-13T01:07:58.246471590Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:07:58.251828 containerd[1470]: time="2024-12-13T01:07:58.251786075Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:07:58.251860 containerd[1470]: time="2024-12-13T01:07:58.251846213Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:07:58.251879 containerd[1470]: time="2024-12-13T01:07:58.251866096Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:07:58.251897 containerd[1470]: time="2024-12-13T01:07:58.251881555Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:07:58.251925 containerd[1470]: time="2024-12-13T01:07:58.251895052Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:07:58.252084 containerd[1470]: time="2024-12-13T01:07:58.252061204Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 01:07:58.252336 containerd[1470]: time="2024-12-13T01:07:58.252317097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:07:58.252465 containerd[1470]: time="2024-12-13T01:07:58.252433011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:07:58.252486 containerd[1470]: time="2024-12-13T01:07:58.252466802Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:07:58.252486 containerd[1470]: time="2024-12-13T01:07:58.252480137Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:07:58.252530 containerd[1470]: time="2024-12-13T01:07:58.252493625Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252530 containerd[1470]: time="2024-12-13T01:07:58.252506855Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252530 containerd[1470]: time="2024-12-13T01:07:58.252520734Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252578 containerd[1470]: time="2024-12-13T01:07:58.252534078Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252578 containerd[1470]: time="2024-12-13T01:07:58.252548242Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252578 containerd[1470]: time="2024-12-13T01:07:58.252561386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252578 containerd[1470]: time="2024-12-13T01:07:58.252573217Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252646 containerd[1470]: time="2024-12-13T01:07:58.252586210Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:07:58.252646 containerd[1470]: time="2024-12-13T01:07:58.252605599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252646 containerd[1470]: time="2024-12-13T01:07:58.252618497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252646 containerd[1470]: time="2024-12-13T01:07:58.252630776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252646 containerd[1470]: time="2024-12-13T01:07:58.252642721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252654590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252667582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252678615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252690303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252701897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252716174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252727834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252742 containerd[1470]: time="2024-12-13T01:07:58.252738571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252871 containerd[1470]: time="2024-12-13T01:07:58.252750059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252871 containerd[1470]: time="2024-12-13T01:07:58.252765879Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:07:58.252871 containerd[1470]: time="2024-12-13T01:07:58.252790837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252871 containerd[1470]: time="2024-12-13T01:07:58.252802706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252871 containerd[1470]: time="2024-12-13T01:07:58.252813738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:07:58.252871 containerd[1470]: time="2024-12-13T01:07:58.252863824Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252880699Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252891246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252904657Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252914318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252930262Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252944759Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:07:58.252972 containerd[1470]: time="2024-12-13T01:07:58.252954200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:07:58.253263 containerd[1470]: time="2024-12-13T01:07:58.253200812Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:07:58.253263 containerd[1470]: time="2024-12-13T01:07:58.253262663Z" level=info msg="Connect containerd service" Dec 13 01:07:58.253630 containerd[1470]: time="2024-12-13T01:07:58.253296091Z" level=info msg="using legacy CRI server" Dec 13 01:07:58.253630 containerd[1470]: time="2024-12-13T01:07:58.253303554Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:07:58.253630 containerd[1470]: time="2024-12-13T01:07:58.253417023Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:07:58.254024 containerd[1470]: time="2024-12-13T01:07:58.254000890Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:07:58.254228 
containerd[1470]: time="2024-12-13T01:07:58.254138935Z" level=info msg="Start subscribing containerd event" Dec 13 01:07:58.254228 containerd[1470]: time="2024-12-13T01:07:58.254202689Z" level=info msg="Start recovering state" Dec 13 01:07:58.254341 containerd[1470]: time="2024-12-13T01:07:58.254323792Z" level=info msg="Start event monitor" Dec 13 01:07:58.254364 containerd[1470]: time="2024-12-13T01:07:58.254324943Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:07:58.254383 containerd[1470]: time="2024-12-13T01:07:58.254350205Z" level=info msg="Start snapshots syncer" Dec 13 01:07:58.254383 containerd[1470]: time="2024-12-13T01:07:58.254372583Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:07:58.254383 containerd[1470]: time="2024-12-13T01:07:58.254381206Z" level=info msg="Start streaming server" Dec 13 01:07:58.254462 containerd[1470]: time="2024-12-13T01:07:58.254397902Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:07:58.254597 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:07:58.254925 containerd[1470]: time="2024-12-13T01:07:58.254906250Z" level=info msg="containerd successfully booted in 0.036350s" Dec 13 01:07:58.425015 tar[1463]: linux-amd64/LICENSE Dec 13 01:07:58.425072 tar[1463]: linux-amd64/README.md Dec 13 01:07:58.444583 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:07:58.446658 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:07:58.448795 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:43538.service - OpenSSH per-connection server daemon (10.0.0.1:43538). Dec 13 01:07:58.487694 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 43538 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:58.489751 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:58.497957 systemd-logind[1448]: New session 1 of user core. Dec 13 01:07:58.499077 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:07:58.509628 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:07:58.521280 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:07:58.536657 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:07:58.540699 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:07:58.638362 systemd[1534]: Queued start job for default target default.target. Dec 13 01:07:58.652675 systemd[1534]: Created slice app.slice - User Application Slice. Dec 13 01:07:58.652698 systemd[1534]: Reached target paths.target - Paths. Dec 13 01:07:58.652711 systemd[1534]: Reached target timers.target - Timers. Dec 13 01:07:58.654222 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:07:58.665754 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:07:58.665902 systemd[1534]: Reached target sockets.target - Sockets. Dec 13 01:07:58.665922 systemd[1534]: Reached target basic.target - Basic System. Dec 13 01:07:58.665969 systemd[1534]: Reached target default.target - Main User Target. Dec 13 01:07:58.666005 systemd[1534]: Startup finished in 118ms. Dec 13 01:07:58.666312 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:07:58.668788 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 13 01:07:58.730613 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:39002.service - OpenSSH per-connection server daemon (10.0.0.1:39002). Dec 13 01:07:58.761311 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 39002 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:58.762775 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:58.766765 systemd-logind[1448]: New session 2 of user core. Dec 13 01:07:58.776609 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:07:58.829458 sshd[1545]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:58.839883 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:39002.service: Deactivated successfully. Dec 13 01:07:58.841543 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:07:58.842918 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:07:58.844045 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:39004.service - OpenSSH per-connection server daemon (10.0.0.1:39004). Dec 13 01:07:58.846060 systemd-logind[1448]: Removed session 2. Dec 13 01:07:58.874961 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 39004 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:58.876315 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:58.879789 systemd-logind[1448]: New session 3 of user core. Dec 13 01:07:58.889550 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:07:58.941345 sshd[1552]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:58.945558 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:39004.service: Deactivated successfully. Dec 13 01:07:58.947779 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:07:58.948371 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:07:58.949281 systemd-logind[1448]: Removed session 3. Dec 13 01:07:59.690738 systemd-networkd[1401]: eth0: Gained IPv6LL Dec 13 01:07:59.693862 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:07:59.695722 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:07:59.706969 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:07:59.709672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:59.711777 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:07:59.732324 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:07:59.732617 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:07:59.734320 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:07:59.737887 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:08:00.320979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:00.322637 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:08:00.323883 systemd[1]: Startup finished in 755ms (kernel) + 5.138s (initrd) + 4.528s (userspace) = 10.422s. 
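The "Startup finished" entry above sums the boot stages: 755 ms (kernel) + 5.138 s (initrd) + 4.528 s (userspace). The printed components add up to 10.421 s against the reported 10.422 s total, presumably because systemd rounds each stage for display while computing the total from the raw timestamps. A quick check (illustrative Python, not part of the log):

kernel_s = 0.755
initrd_s = 5.138
userspace_s = 4.528

total = kernel_s + initrd_s + userspace_s
print(f"{total:.3f} s")  # 10.421 s; the log reports 10.422 s, the 1 ms gap
                         # being sub-millisecond rounding of the per-stage figures.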
Dec 13 01:08:00.336249 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:08:00.766207 kubelet[1580]: E1213 01:08:00.766076 1580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:08:00.770161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:08:00.770372 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:08:08.687206 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:56784.service - OpenSSH per-connection server daemon (10.0.0.1:56784). Dec 13 01:08:08.722384 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 56784 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:08.724078 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:08.727807 systemd-logind[1448]: New session 4 of user core. Dec 13 01:08:08.742673 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:08:08.798368 sshd[1594]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:08.813225 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:56784.service: Deactivated successfully. Dec 13 01:08:08.815062 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:08:08.816470 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:08:08.831825 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). Dec 13 01:08:08.832915 systemd-logind[1448]: Removed session 4. Dec 13 01:08:08.860821 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:08.862557 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:08.866786 systemd-logind[1448]: New session 5 of user core. Dec 13 01:08:08.876681 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:08:08.926309 sshd[1601]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:08.947516 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:56796.service: Deactivated successfully. Dec 13 01:08:08.949414 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:08:08.951181 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:08:08.961840 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:56804.service - OpenSSH per-connection server daemon (10.0.0.1:56804). Dec 13 01:08:08.962868 systemd-logind[1448]: Removed session 5. Dec 13 01:08:08.991232 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 56804 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:08.992954 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:08.996994 systemd-logind[1448]: New session 6 of user core. Dec 13 01:08:09.006561 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:08:09.059711 sshd[1608]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:09.075149 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:56804.service: Deactivated successfully. 
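The kubelet failure logged at 01:08:00 above is the expected state on a node where kubeadm has not yet run: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, so until that happens the unit exits with status 1 and systemd keeps rescheduling it. A minimal sketch of that condition, with a hypothetical stand-in for the file kubeadm would later create (illustrative Python, not the actual Flatcar or kubeadm tooling; only the fields shown are assumed here):

from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

# Hypothetical stand-in for the config that `kubeadm init`/`kubeadm join`
# would write; shown purely for illustration.
MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

if not KUBELET_CONFIG.exists():
    # This is the condition behind the repeated
    # "failed to load Kubelet config file" exits seen in the log.
    print(f"{KUBELET_CONFIG} missing - kubelet will keep exiting until kubeadm creates it")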
Dec 13 01:08:09.076701 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:08:09.078209 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:08:09.079550 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:56808.service - OpenSSH per-connection server daemon (10.0.0.1:56808). Dec 13 01:08:09.080277 systemd-logind[1448]: Removed session 6. Dec 13 01:08:09.112958 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 56808 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:09.114834 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:09.119189 systemd-logind[1448]: New session 7 of user core. Dec 13 01:08:09.130719 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:08:09.192953 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:08:09.193287 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:08:09.209862 sudo[1618]: pam_unix(sudo:session): session closed for user root Dec 13 01:08:09.211744 sshd[1615]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:09.222521 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:56808.service: Deactivated successfully. Dec 13 01:08:09.224142 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:08:09.225600 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:08:09.232695 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:56816.service - OpenSSH per-connection server daemon (10.0.0.1:56816). Dec 13 01:08:09.233497 systemd-logind[1448]: Removed session 7. Dec 13 01:08:09.259466 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 56816 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:09.260873 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:09.264857 systemd-logind[1448]: New session 8 of user core. Dec 13 01:08:09.272629 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:08:09.327383 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:08:09.327813 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:08:09.331873 sudo[1627]: pam_unix(sudo:session): session closed for user root Dec 13 01:08:09.337589 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:08:09.337916 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:08:09.358653 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:08:09.360312 auditctl[1630]: No rules Dec 13 01:08:09.361466 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:08:09.361710 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:08:09.363506 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:08:09.395152 augenrules[1648]: No rules Dec 13 01:08:09.396902 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:08:09.398298 sudo[1626]: pam_unix(sudo:session): session closed for user root Dec 13 01:08:09.400282 sshd[1623]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:09.412046 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:56816.service: Deactivated successfully. 
Dec 13 01:08:09.413544 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:08:09.415140 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:08:09.425692 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:56828.service - OpenSSH per-connection server daemon (10.0.0.1:56828). Dec 13 01:08:09.426495 systemd-logind[1448]: Removed session 8. Dec 13 01:08:09.453570 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 56828 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:09.455099 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:09.459135 systemd-logind[1448]: New session 9 of user core. Dec 13 01:08:09.473610 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:08:09.526377 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:08:09.526805 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:08:09.815766 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:08:09.815857 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:08:10.090803 dockerd[1677]: time="2024-12-13T01:08:10.090649519Z" level=info msg="Starting up" Dec 13 01:08:10.251595 dockerd[1677]: time="2024-12-13T01:08:10.251526059Z" level=info msg="Loading containers: start." Dec 13 01:08:10.364460 kernel: Initializing XFRM netlink socket Dec 13 01:08:10.443057 systemd-networkd[1401]: docker0: Link UP Dec 13 01:08:10.469226 dockerd[1677]: time="2024-12-13T01:08:10.469168386Z" level=info msg="Loading containers: done." Dec 13 01:08:10.488042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1945705759-merged.mount: Deactivated successfully. Dec 13 01:08:10.491084 dockerd[1677]: time="2024-12-13T01:08:10.491029353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:08:10.491177 dockerd[1677]: time="2024-12-13T01:08:10.491149113Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:08:10.491319 dockerd[1677]: time="2024-12-13T01:08:10.491294877Z" level=info msg="Daemon has completed initialization" Dec 13 01:08:10.532903 dockerd[1677]: time="2024-12-13T01:08:10.532834889Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:08:10.533046 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:08:10.850613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:08:10.858655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:11.021545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:08:11.026615 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:08:11.073229 kubelet[1835]: E1213 01:08:11.073178 1835 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:08:11.080287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:08:11.080508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:08:11.497236 containerd[1470]: time="2024-12-13T01:08:11.497194261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:08:12.709278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118740057.mount: Deactivated successfully. Dec 13 01:08:14.001134 containerd[1470]: time="2024-12-13T01:08:14.001070488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:14.001884 containerd[1470]: time="2024-12-13T01:08:14.001808338Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:08:14.002909 containerd[1470]: time="2024-12-13T01:08:14.002879075Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:14.005429 containerd[1470]: time="2024-12-13T01:08:14.005393623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:14.006563 containerd[1470]: time="2024-12-13T01:08:14.006536928Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.509303774s" Dec 13 01:08:14.006610 containerd[1470]: time="2024-12-13T01:08:14.006566358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:08:14.029277 containerd[1470]: time="2024-12-13T01:08:14.029225938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:08:16.488073 containerd[1470]: time="2024-12-13T01:08:16.487984470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:16.488685 containerd[1470]: time="2024-12-13T01:08:16.488660736Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:08:16.489920 containerd[1470]: time="2024-12-13T01:08:16.489884612Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:16.492687 
containerd[1470]: time="2024-12-13T01:08:16.492654038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:16.493779 containerd[1470]: time="2024-12-13T01:08:16.493746731Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.464480282s" Dec 13 01:08:16.493816 containerd[1470]: time="2024-12-13T01:08:16.493784102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:08:16.516823 containerd[1470]: time="2024-12-13T01:08:16.516760020Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:08:17.940434 containerd[1470]: time="2024-12-13T01:08:17.940372786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:17.941139 containerd[1470]: time="2024-12-13T01:08:17.941096278Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:08:17.942161 containerd[1470]: time="2024-12-13T01:08:17.942129765Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:17.944660 containerd[1470]: time="2024-12-13T01:08:17.944619048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:17.945522 containerd[1470]: time="2024-12-13T01:08:17.945486092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.428673713s" Dec 13 01:08:17.945522 containerd[1470]: time="2024-12-13T01:08:17.945519063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:08:17.965873 containerd[1470]: time="2024-12-13T01:08:17.965841833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:08:19.790680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688977977.mount: Deactivated successfully. 
Dec 13 01:08:20.486544 containerd[1470]: time="2024-12-13T01:08:20.486433693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:20.487218 containerd[1470]: time="2024-12-13T01:08:20.487134861Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:08:20.488629 containerd[1470]: time="2024-12-13T01:08:20.488598728Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:20.491059 containerd[1470]: time="2024-12-13T01:08:20.491015020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:20.491753 containerd[1470]: time="2024-12-13T01:08:20.491716098Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.525836171s" Dec 13 01:08:20.491753 containerd[1470]: time="2024-12-13T01:08:20.491750132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:08:20.512939 containerd[1470]: time="2024-12-13T01:08:20.512890125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:08:21.037638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142656241.mount: Deactivated successfully. Dec 13 01:08:21.330902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:08:21.338620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:21.524569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:21.532857 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:08:21.664901 kubelet[1967]: E1213 01:08:21.664694 1967 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:08:21.669034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:08:21.669249 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
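The kubelet failure/restart cadence above (failure at 01:08:00, restart queued at 01:08:10, failure at 01:08:11, restart queued at 01:08:21, failure at 01:08:21) is consistent with a Restart= policy using a roughly 10 s RestartSec, which is what the standard kubeadm drop-in configures; the actual unit settings are not shown in this log, so that is an inference. The intervals can be read straight off the logged timestamps (illustrative Python, not part of the log):

from datetime import datetime

# Timestamps copied from the kubelet.service entries above (same day, Dec 13).
events = [
    ("first failure",     "01:08:00.770372"),
    ("restart #1 queued", "01:08:10.850613"),
    ("second failure",    "01:08:11.080508"),
    ("restart #2 queued", "01:08:21.330902"),
    ("third failure",     "01:08:21.669249"),
]

fmt = "%H:%M:%S.%f"
times = [(name, datetime.strptime(ts, fmt)) for name, ts in events]
for (prev_name, prev_t), (name, t) in zip(times, times[1:]):
    print(f"{prev_name} -> {name}: +{(t - prev_t).total_seconds():.1f} s")
# The failure -> restart gaps come out at about 10.1 s and 10.3 s.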
Dec 13 01:08:22.433148 containerd[1470]: time="2024-12-13T01:08:22.433098052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:22.433893 containerd[1470]: time="2024-12-13T01:08:22.433832371Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:08:22.434928 containerd[1470]: time="2024-12-13T01:08:22.434889851Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:22.437868 containerd[1470]: time="2024-12-13T01:08:22.437839485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:22.438984 containerd[1470]: time="2024-12-13T01:08:22.438930710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.926006556s" Dec 13 01:08:22.438984 containerd[1470]: time="2024-12-13T01:08:22.438963215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:08:22.579472 containerd[1470]: time="2024-12-13T01:08:22.579405627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:08:24.589711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509327312.mount: Deactivated successfully. 
Dec 13 01:08:24.597825 containerd[1470]: time="2024-12-13T01:08:24.597773484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.602687 containerd[1470]: time="2024-12-13T01:08:24.602637012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:08:24.604480 containerd[1470]: time="2024-12-13T01:08:24.604454812Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.606876 containerd[1470]: time="2024-12-13T01:08:24.606816299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.607682 containerd[1470]: time="2024-12-13T01:08:24.607644417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.028174086s" Dec 13 01:08:24.607734 containerd[1470]: time="2024-12-13T01:08:24.607680809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:08:24.634247 containerd[1470]: time="2024-12-13T01:08:24.634208147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:08:25.407316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194463990.mount: Deactivated successfully. Dec 13 01:08:27.640812 containerd[1470]: time="2024-12-13T01:08:27.640733091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:27.641696 containerd[1470]: time="2024-12-13T01:08:27.641637147Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:08:27.643214 containerd[1470]: time="2024-12-13T01:08:27.643178568Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:27.646820 containerd[1470]: time="2024-12-13T01:08:27.646753709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:27.651133 containerd[1470]: time="2024-12-13T01:08:27.649588472Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.015336345s" Dec 13 01:08:27.651133 containerd[1470]: time="2024-12-13T01:08:27.649647191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:08:29.866751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
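The etcd pull above reports 57,238,571 bytes read in about 3.02 s, i.e. a rough effective rate of ~18 MiB/s, ignoring unpack and registry round-trip overhead. A quick check of that figure (illustrative Python, not part of the log):

BYTES_READ = 57_238_571      # "bytes read" reported for the registry.k8s.io/etcd:3.5.12-0 pull
DURATION_S = 3.015336345     # pull duration reported by containerd

rate = BYTES_READ / DURATION_S
print(f"{rate / 2**20:.1f} MiB/s")  # ~18.1 MiB/s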
Dec 13 01:08:29.881628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:29.909571 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-9.scope)... Dec 13 01:08:29.909589 systemd[1]: Reloading... Dec 13 01:08:29.993464 zram_generator::config[2192]: No configuration found. Dec 13 01:08:30.310859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:08:30.388818 systemd[1]: Reloading finished in 478 ms. Dec 13 01:08:30.441851 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:08:30.441969 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:08:30.442261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:30.444973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:30.610749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:30.616800 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:08:30.659631 kubelet[2241]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:08:30.659631 kubelet[2241]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:08:30.659631 kubelet[2241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:08:30.659927 kubelet[2241]: I1213 01:08:30.659698 2241 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:08:30.978714 kubelet[2241]: I1213 01:08:30.978665 2241 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:08:30.978714 kubelet[2241]: I1213 01:08:30.978699 2241 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:08:30.978973 kubelet[2241]: I1213 01:08:30.978948 2241 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:08:30.994302 kubelet[2241]: I1213 01:08:30.994235 2241 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:08:30.994621 kubelet[2241]: E1213 01:08:30.994578 2241 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.006683 kubelet[2241]: I1213 01:08:31.006648 2241 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:08:31.007781 kubelet[2241]: I1213 01:08:31.007733 2241 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:08:31.007963 kubelet[2241]: I1213 01:08:31.007779 2241 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:08:31.008346 kubelet[2241]: I1213 01:08:31.008321 2241 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:08:31.008346 kubelet[2241]: I1213 01:08:31.008337 2241 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:08:31.008529 kubelet[2241]: I1213 01:08:31.008502 2241 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:08:31.009090 kubelet[2241]: I1213 01:08:31.009060 2241 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:08:31.009090 kubelet[2241]: I1213 01:08:31.009081 2241 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:08:31.009146 kubelet[2241]: I1213 01:08:31.009105 2241 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:08:31.009146 kubelet[2241]: I1213 01:08:31.009129 2241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:08:31.012568 kubelet[2241]: W1213 01:08:31.012494 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.012568 kubelet[2241]: E1213 01:08:31.012562 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.012720 kubelet[2241]: W1213 01:08:31.012583 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.012720 kubelet[2241]: E1213 01:08:31.012609 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.013705 kubelet[2241]: I1213 01:08:31.013671 2241 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:08:31.014864 kubelet[2241]: I1213 01:08:31.014838 2241 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:08:31.014907 kubelet[2241]: W1213 01:08:31.014892 2241 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:08:31.015556 kubelet[2241]: I1213 01:08:31.015537 2241 server.go:1264] "Started kubelet" Dec 13 01:08:31.015633 kubelet[2241]: I1213 01:08:31.015613 2241 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:08:31.016559 kubelet[2241]: I1213 01:08:31.016537 2241 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:08:31.016991 kubelet[2241]: I1213 01:08:31.016928 2241 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:08:31.017903 kubelet[2241]: I1213 01:08:31.017316 2241 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:08:31.017903 kubelet[2241]: I1213 01:08:31.017783 2241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:08:31.018937 kubelet[2241]: E1213 01:08:31.018921 2241 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:08:31.019045 kubelet[2241]: I1213 01:08:31.019031 2241 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:08:31.019204 kubelet[2241]: I1213 01:08:31.019192 2241 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:08:31.019293 kubelet[2241]: I1213 01:08:31.019282 2241 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:08:31.019964 kubelet[2241]: W1213 01:08:31.019712 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.019964 kubelet[2241]: E1213 01:08:31.019762 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.021307 kubelet[2241]: E1213 01:08:31.021258 2241 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:08:31.021739 kubelet[2241]: E1213 01:08:31.021546 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Dec 13 01:08:31.021901 kubelet[2241]: I1213 01:08:31.021861 2241 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:08:31.022235 kubelet[2241]: I1213 01:08:31.021948 2241 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:08:31.022963 kubelet[2241]: E1213 01:08:31.022860 2241 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097352318aa7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:08:31.015520893 +0000 UTC m=+0.394471310,LastTimestamp:2024-12-13 01:08:31.015520893 +0000 UTC m=+0.394471310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:08:31.024975 kubelet[2241]: I1213 01:08:31.024957 2241 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:08:31.037269 kubelet[2241]: I1213 01:08:31.037193 2241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:08:31.038833 kubelet[2241]: I1213 01:08:31.038794 2241 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:08:31.038833 kubelet[2241]: I1213 01:08:31.038825 2241 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:08:31.038959 kubelet[2241]: I1213 01:08:31.038843 2241 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:08:31.038959 kubelet[2241]: E1213 01:08:31.038885 2241 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:08:31.040248 kubelet[2241]: W1213 01:08:31.039607 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.040248 kubelet[2241]: E1213 01:08:31.039666 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:31.040807 kubelet[2241]: I1213 01:08:31.040782 2241 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:08:31.040807 kubelet[2241]: I1213 01:08:31.040801 2241 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:08:31.040892 kubelet[2241]: I1213 01:08:31.040820 2241 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:08:31.097754 kubelet[2241]: E1213 01:08:31.097624 2241 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097352318aa7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:08:31.015520893 +0000 UTC m=+0.394471310,LastTimestamp:2024-12-13 01:08:31.015520893 +0000 UTC m=+0.394471310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:08:31.121104 kubelet[2241]: I1213 01:08:31.121066 2241 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:31.121645 kubelet[2241]: E1213 01:08:31.121583 2241 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Dec 13 01:08:31.139472 kubelet[2241]: E1213 01:08:31.139385 2241 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:08:31.222640 kubelet[2241]: E1213 01:08:31.222562 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Dec 13 01:08:31.294759 kubelet[2241]: I1213 01:08:31.294623 2241 policy_none.go:49] "None policy: Start" Dec 13 01:08:31.295498 kubelet[2241]: I1213 01:08:31.295407 2241 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:08:31.295498 kubelet[2241]: I1213 
01:08:31.295459 2241 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:08:31.302746 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:08:31.316089 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:08:31.318889 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:08:31.323552 kubelet[2241]: I1213 01:08:31.323517 2241 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:31.323924 kubelet[2241]: E1213 01:08:31.323887 2241 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Dec 13 01:08:31.329292 kubelet[2241]: I1213 01:08:31.329269 2241 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:08:31.329536 kubelet[2241]: I1213 01:08:31.329498 2241 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:08:31.329655 kubelet[2241]: I1213 01:08:31.329635 2241 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:08:31.330418 kubelet[2241]: E1213 01:08:31.330375 2241 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:08:31.340489 kubelet[2241]: I1213 01:08:31.340400 2241 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:08:31.341414 kubelet[2241]: I1213 01:08:31.341381 2241 topology_manager.go:215] "Topology Admit Handler" podUID="a5af00fcdb5e9ea6d36778433aa4f221" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:08:31.342400 kubelet[2241]: I1213 01:08:31.342371 2241 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:08:31.348017 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 01:08:31.361104 systemd[1]: Created slice kubepods-burstable-poda5af00fcdb5e9ea6d36778433aa4f221.slice - libcontainer container kubepods-burstable-poda5af00fcdb5e9ea6d36778433aa4f221.slice. Dec 13 01:08:31.376926 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. 
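The kubelet here runs with the systemd cgroup driver, so each admitted static pod gets a transient slice under its QoS parent, as in the kubepods-burstable-pod<uid>.slice units created above. Below is a minimal sketch of that naming scheme, not the kubelet's actual code; the escaping of dashes to underscores is consistent with the kubepods-besteffort-pod8da74c7a_ff14_42c6_93c7_43febe420f8c.slice unit that appears later for kube-proxy.

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName approximates how a systemd-cgroup-driver kubelet names the
// per-pod slice seen in the log, e.g.
// kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
// Illustrative sketch only, not the kubelet's implementation.
func podSliceName(qosParent, podUID string) string {
	// systemd unit names use "-" as a hierarchy separator, so dashes in the
	// pod UID are escaped to underscores before being embedded.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("%s-pod%s.slice", qosParent, escaped)
}

func main() {
	fmt.Println(podSliceName("kubepods-burstable", "b107a98bcf27297d642d248711a3fc70"))
	fmt.Println(podSliceName("kubepods-besteffort", "8da74c7a-ff14-42c6-93c7-43febe420f8c"))
}
```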
Dec 13 01:08:31.421115 kubelet[2241]: I1213 01:08:31.421053 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:08:31.421115 kubelet[2241]: I1213 01:08:31.421100 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5af00fcdb5e9ea6d36778433aa4f221-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5af00fcdb5e9ea6d36778433aa4f221\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:31.421115 kubelet[2241]: I1213 01:08:31.421123 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5af00fcdb5e9ea6d36778433aa4f221-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5af00fcdb5e9ea6d36778433aa4f221\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:31.421303 kubelet[2241]: I1213 01:08:31.421176 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:31.421303 kubelet[2241]: I1213 01:08:31.421213 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:31.421303 kubelet[2241]: I1213 01:08:31.421254 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:31.421303 kubelet[2241]: I1213 01:08:31.421287 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5af00fcdb5e9ea6d36778433aa4f221-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a5af00fcdb5e9ea6d36778433aa4f221\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:31.421430 kubelet[2241]: I1213 01:08:31.421311 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:31.421430 kubelet[2241]: I1213 01:08:31.421330 2241 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:31.623939 kubelet[2241]: E1213 01:08:31.623803 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Dec 13 01:08:31.660084 kubelet[2241]: E1213 01:08:31.660055 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:31.660741 containerd[1470]: time="2024-12-13T01:08:31.660697912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:31.674916 kubelet[2241]: E1213 01:08:31.674880 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:31.675273 containerd[1470]: time="2024-12-13T01:08:31.675230104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a5af00fcdb5e9ea6d36778433aa4f221,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:31.679530 kubelet[2241]: E1213 01:08:31.679503 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:31.679814 containerd[1470]: time="2024-12-13T01:08:31.679787179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:31.725375 kubelet[2241]: I1213 01:08:31.725345 2241 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:31.725672 kubelet[2241]: E1213 01:08:31.725646 2241 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Dec 13 01:08:32.002992 kubelet[2241]: W1213 01:08:32.002899 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.002992 kubelet[2241]: E1213 01:08:32.002994 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.060071 kubelet[2241]: W1213 01:08:32.060029 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.060071 kubelet[2241]: E1213 01:08:32.060074 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.161157 kubelet[2241]: W1213 01:08:32.161089 2241 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.161157 kubelet[2241]: E1213 01:08:32.161157 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.218508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997703902.mount: Deactivated successfully. Dec 13 01:08:32.226643 containerd[1470]: time="2024-12-13T01:08:32.226581074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:32.227679 containerd[1470]: time="2024-12-13T01:08:32.227617861Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:32.228607 containerd[1470]: time="2024-12-13T01:08:32.228512893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:08:32.229703 containerd[1470]: time="2024-12-13T01:08:32.229653427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:32.230559 containerd[1470]: time="2024-12-13T01:08:32.230500198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:08:32.231349 containerd[1470]: time="2024-12-13T01:08:32.231306906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:08:32.232421 containerd[1470]: time="2024-12-13T01:08:32.232380418Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:32.237414 containerd[1470]: time="2024-12-13T01:08:32.237331916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:32.238413 containerd[1470]: time="2024-12-13T01:08:32.238358469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.072837ms" Dec 13 01:08:32.240132 containerd[1470]: time="2024-12-13T01:08:32.240065031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.27697ms" Dec 13 01:08:32.241774 containerd[1470]: time="2024-12-13T01:08:32.241714761Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.863334ms" Dec 13 01:08:32.425058 kubelet[2241]: E1213 01:08:32.424888 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Dec 13 01:08:32.527459 kubelet[2241]: I1213 01:08:32.527395 2241 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:32.527748 kubelet[2241]: E1213 01:08:32.527721 2241 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Dec 13 01:08:32.559475 kubelet[2241]: W1213 01:08:32.559399 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:32.559475 kubelet[2241]: E1213 01:08:32.559474 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:33.188762 kubelet[2241]: E1213 01:08:33.188729 2241 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:33.310270 containerd[1470]: time="2024-12-13T01:08:33.310150181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:33.310270 containerd[1470]: time="2024-12-13T01:08:33.310227716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:33.310270 containerd[1470]: time="2024-12-13T01:08:33.310242520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.311266 containerd[1470]: time="2024-12-13T01:08:33.310321900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.311266 containerd[1470]: time="2024-12-13T01:08:33.310797469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:33.311266 containerd[1470]: time="2024-12-13T01:08:33.310854710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:33.311266 containerd[1470]: time="2024-12-13T01:08:33.310877040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.311266 containerd[1470]: time="2024-12-13T01:08:33.310979352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.327303 systemd[1]: run-containerd-runc-k8s.io-1f73e3daa50237eddcf213e6e3ea2885f8c87e92c7724682c2afc40ab08a8fca-runc.dferat.mount: Deactivated successfully. Dec 13 01:08:33.338645 systemd[1]: Started cri-containerd-1f73e3daa50237eddcf213e6e3ea2885f8c87e92c7724682c2afc40ab08a8fca.scope - libcontainer container 1f73e3daa50237eddcf213e6e3ea2885f8c87e92c7724682c2afc40ab08a8fca. Dec 13 01:08:33.340310 systemd[1]: Started cri-containerd-df4a3bf579dd759f67d11061822f044ed0a9a13a382345ab4c6cf311d8938078.scope - libcontainer container df4a3bf579dd759f67d11061822f044ed0a9a13a382345ab4c6cf311d8938078. Dec 13 01:08:33.357162 containerd[1470]: time="2024-12-13T01:08:33.356963773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:33.357162 containerd[1470]: time="2024-12-13T01:08:33.357020882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:33.357162 containerd[1470]: time="2024-12-13T01:08:33.357034573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.357887 containerd[1470]: time="2024-12-13T01:08:33.357820647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.380674 systemd[1]: Started cri-containerd-23246e7fc2d50486c2cea93aad809eea01cd8fa321011ab835f31a92cdebf422.scope - libcontainer container 23246e7fc2d50486c2cea93aad809eea01cd8fa321011ab835f31a92cdebf422. 
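While the API server at 10.0.0.52:6443 keeps refusing connections, the node-lease controller's "Failed to ensure lease exists, will retry" entries show the retry interval doubling: 200ms, 400ms, 800ms, 1.6s. A hedged sketch of that doubling backoff follows; the base interval is taken from the log, but the cap and attempt count are assumptions for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// Doubling backoff as suggested by the lease-controller retry intervals in the
// log (200ms -> 400ms -> 800ms -> 1.6s). The 7s cap is an assumed value for
// illustration, not read from the log.
func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: will retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```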
Dec 13 01:08:33.382085 containerd[1470]: time="2024-12-13T01:08:33.382042983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f73e3daa50237eddcf213e6e3ea2885f8c87e92c7724682c2afc40ab08a8fca\"" Dec 13 01:08:33.382597 containerd[1470]: time="2024-12-13T01:08:33.382333833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a5af00fcdb5e9ea6d36778433aa4f221,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4a3bf579dd759f67d11061822f044ed0a9a13a382345ab4c6cf311d8938078\"" Dec 13 01:08:33.383971 kubelet[2241]: E1213 01:08:33.383679 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:33.384178 kubelet[2241]: E1213 01:08:33.384128 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:33.387264 containerd[1470]: time="2024-12-13T01:08:33.387213767Z" level=info msg="CreateContainer within sandbox \"1f73e3daa50237eddcf213e6e3ea2885f8c87e92c7724682c2afc40ab08a8fca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:08:33.388176 containerd[1470]: time="2024-12-13T01:08:33.388063214Z" level=info msg="CreateContainer within sandbox \"df4a3bf579dd759f67d11061822f044ed0a9a13a382345ab4c6cf311d8938078\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:08:33.416227 containerd[1470]: time="2024-12-13T01:08:33.416187595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"23246e7fc2d50486c2cea93aad809eea01cd8fa321011ab835f31a92cdebf422\"" Dec 13 01:08:33.417029 kubelet[2241]: E1213 01:08:33.416996 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:33.419145 containerd[1470]: time="2024-12-13T01:08:33.419123277Z" level=info msg="CreateContainer within sandbox \"23246e7fc2d50486c2cea93aad809eea01cd8fa321011ab835f31a92cdebf422\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:08:33.791101 containerd[1470]: time="2024-12-13T01:08:33.791038163Z" level=info msg="CreateContainer within sandbox \"1f73e3daa50237eddcf213e6e3ea2885f8c87e92c7724682c2afc40ab08a8fca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dbfcfd966444c9ea22fc535aeefebbcbc4350d5615f19de2242e9ef4faf2f745\"" Dec 13 01:08:33.791767 containerd[1470]: time="2024-12-13T01:08:33.791735496Z" level=info msg="StartContainer for \"dbfcfd966444c9ea22fc535aeefebbcbc4350d5615f19de2242e9ef4faf2f745\"" Dec 13 01:08:33.795871 containerd[1470]: time="2024-12-13T01:08:33.795810132Z" level=info msg="CreateContainer within sandbox \"df4a3bf579dd759f67d11061822f044ed0a9a13a382345ab4c6cf311d8938078\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe4dc68100e64a2a63bdf84984d58cd7c26d0c36c74353f9d5c44d752158e929\"" Dec 13 01:08:33.796384 containerd[1470]: time="2024-12-13T01:08:33.796360020Z" level=info msg="StartContainer for \"fe4dc68100e64a2a63bdf84984d58cd7c26d0c36c74353f9d5c44d752158e929\"" Dec 13 01:08:33.798468 
containerd[1470]: time="2024-12-13T01:08:33.798417459Z" level=info msg="CreateContainer within sandbox \"23246e7fc2d50486c2cea93aad809eea01cd8fa321011ab835f31a92cdebf422\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e122566e11431e54f6f05500c810239db8dcc12d43eebd665a50f54d25378e68\"" Dec 13 01:08:33.799992 containerd[1470]: time="2024-12-13T01:08:33.799231346Z" level=info msg="StartContainer for \"e122566e11431e54f6f05500c810239db8dcc12d43eebd665a50f54d25378e68\"" Dec 13 01:08:33.822637 systemd[1]: Started cri-containerd-dbfcfd966444c9ea22fc535aeefebbcbc4350d5615f19de2242e9ef4faf2f745.scope - libcontainer container dbfcfd966444c9ea22fc535aeefebbcbc4350d5615f19de2242e9ef4faf2f745. Dec 13 01:08:33.826968 systemd[1]: Started cri-containerd-e122566e11431e54f6f05500c810239db8dcc12d43eebd665a50f54d25378e68.scope - libcontainer container e122566e11431e54f6f05500c810239db8dcc12d43eebd665a50f54d25378e68. Dec 13 01:08:33.828731 systemd[1]: Started cri-containerd-fe4dc68100e64a2a63bdf84984d58cd7c26d0c36c74353f9d5c44d752158e929.scope - libcontainer container fe4dc68100e64a2a63bdf84984d58cd7c26d0c36c74353f9d5c44d752158e929. Dec 13 01:08:33.874334 containerd[1470]: time="2024-12-13T01:08:33.874149754Z" level=info msg="StartContainer for \"dbfcfd966444c9ea22fc535aeefebbcbc4350d5615f19de2242e9ef4faf2f745\" returns successfully" Dec 13 01:08:33.882039 containerd[1470]: time="2024-12-13T01:08:33.881395586Z" level=info msg="StartContainer for \"e122566e11431e54f6f05500c810239db8dcc12d43eebd665a50f54d25378e68\" returns successfully" Dec 13 01:08:33.882039 containerd[1470]: time="2024-12-13T01:08:33.881504053Z" level=info msg="StartContainer for \"fe4dc68100e64a2a63bdf84984d58cd7c26d0c36c74353f9d5c44d752158e929\" returns successfully" Dec 13 01:08:33.913557 kubelet[2241]: W1213 01:08:33.913413 2241 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:33.913743 kubelet[2241]: E1213 01:08:33.913725 2241 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Dec 13 01:08:34.050767 kubelet[2241]: E1213 01:08:34.050623 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:34.052375 kubelet[2241]: E1213 01:08:34.052228 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:34.054046 kubelet[2241]: E1213 01:08:34.054013 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:34.130266 kubelet[2241]: I1213 01:08:34.130230 2241 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:34.985362 kubelet[2241]: E1213 01:08:34.985303 2241 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:08:35.014680 kubelet[2241]: I1213 01:08:35.014647 2241 apiserver.go:52] "Watching apiserver" Dec 13 
01:08:35.019555 kubelet[2241]: I1213 01:08:35.019524 2241 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:08:35.054803 kubelet[2241]: E1213 01:08:35.054767 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:35.064679 kubelet[2241]: I1213 01:08:35.064644 2241 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:08:35.298277 kubelet[2241]: E1213 01:08:35.298138 2241 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 01:08:35.298456 kubelet[2241]: E1213 01:08:35.298413 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:36.092202 kubelet[2241]: E1213 01:08:36.092144 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:37.056852 kubelet[2241]: E1213 01:08:37.056795 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:37.063071 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-9.scope)... Dec 13 01:08:37.063089 systemd[1]: Reloading... Dec 13 01:08:37.145959 zram_generator::config[2559]: No configuration found. Dec 13 01:08:37.278725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:08:37.399363 systemd[1]: Reloading finished in 335 ms. Dec 13 01:08:37.452785 kubelet[2241]: I1213 01:08:37.452742 2241 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:08:37.453227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:37.478320 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:08:37.478710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:37.489768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:37.653584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:37.659684 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:08:37.710134 kubelet[2604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:08:37.710134 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:08:37.710134 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:08:37.710569 kubelet[2604]: I1213 01:08:37.710194 2604 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:08:37.715076 kubelet[2604]: I1213 01:08:37.715039 2604 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:08:37.715076 kubelet[2604]: I1213 01:08:37.715067 2604 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:08:37.715278 kubelet[2604]: I1213 01:08:37.715263 2604 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:08:37.716527 kubelet[2604]: I1213 01:08:37.716496 2604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:08:37.717600 kubelet[2604]: I1213 01:08:37.717573 2604 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:08:37.728499 kubelet[2604]: I1213 01:08:37.728462 2604 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:08:37.728811 kubelet[2604]: I1213 01:08:37.728760 2604 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:08:37.729029 kubelet[2604]: I1213 01:08:37.728806 2604 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:08:37.729111 kubelet[2604]: I1213 01:08:37.729041 2604 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:08:37.729111 kubelet[2604]: I1213 01:08:37.729052 2604 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:08:37.729111 kubelet[2604]: I1213 01:08:37.729098 2604 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:08:37.729225 kubelet[2604]: I1213 01:08:37.729212 2604 kubelet.go:400] "Attempting to sync node with API server" Dec 13 
01:08:37.729248 kubelet[2604]: I1213 01:08:37.729228 2604 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:08:37.729271 kubelet[2604]: I1213 01:08:37.729250 2604 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:08:37.729271 kubelet[2604]: I1213 01:08:37.729269 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:08:37.730107 kubelet[2604]: I1213 01:08:37.730080 2604 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:08:37.732468 kubelet[2604]: I1213 01:08:37.730291 2604 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:08:37.732468 kubelet[2604]: I1213 01:08:37.730753 2604 server.go:1264] "Started kubelet" Dec 13 01:08:37.732468 kubelet[2604]: I1213 01:08:37.731860 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:08:37.736484 kubelet[2604]: I1213 01:08:37.733659 2604 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:08:37.736484 kubelet[2604]: I1213 01:08:37.734581 2604 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:08:37.736484 kubelet[2604]: I1213 01:08:37.734714 2604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:08:37.736484 kubelet[2604]: I1213 01:08:37.735108 2604 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:08:37.737665 kubelet[2604]: E1213 01:08:37.737640 2604 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:08:37.737823 kubelet[2604]: I1213 01:08:37.737766 2604 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:08:37.738515 kubelet[2604]: I1213 01:08:37.737861 2604 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:08:37.738515 kubelet[2604]: I1213 01:08:37.738018 2604 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:08:37.739193 kubelet[2604]: I1213 01:08:37.739162 2604 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:08:37.739280 kubelet[2604]: I1213 01:08:37.739255 2604 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:08:37.754137 kubelet[2604]: I1213 01:08:37.754077 2604 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:08:37.854694 kubelet[2604]: I1213 01:08:37.854638 2604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:08:37.856223 kubelet[2604]: I1213 01:08:37.856094 2604 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:08:37.856223 kubelet[2604]: I1213 01:08:37.856165 2604 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:08:37.856223 kubelet[2604]: I1213 01:08:37.856191 2604 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:08:37.856470 kubelet[2604]: E1213 01:08:37.856357 2604 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:08:37.857178 kubelet[2604]: I1213 01:08:37.857148 2604 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:37.884703 kubelet[2604]: I1213 01:08:37.884665 2604 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:08:37.884703 kubelet[2604]: I1213 01:08:37.884687 2604 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:08:37.884703 kubelet[2604]: I1213 01:08:37.884705 2604 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:08:37.884899 kubelet[2604]: I1213 01:08:37.884842 2604 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:08:37.884899 kubelet[2604]: I1213 01:08:37.884852 2604 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:08:37.884899 kubelet[2604]: I1213 01:08:37.884869 2604 policy_none.go:49] "None policy: Start" Dec 13 01:08:37.885646 kubelet[2604]: I1213 01:08:37.885624 2604 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:08:37.885646 kubelet[2604]: I1213 01:08:37.885646 2604 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:08:37.885816 kubelet[2604]: I1213 01:08:37.885759 2604 state_mem.go:75] "Updated machine memory state" Dec 13 01:08:37.889246 kubelet[2604]: I1213 01:08:37.888697 2604 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:08:37.889246 kubelet[2604]: I1213 01:08:37.888758 2604 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:08:37.892124 kubelet[2604]: I1213 01:08:37.892087 2604 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:08:37.892392 kubelet[2604]: I1213 01:08:37.892353 2604 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:08:37.892759 kubelet[2604]: I1213 01:08:37.892579 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:08:37.957092 kubelet[2604]: I1213 01:08:37.957042 2604 topology_manager.go:215] "Topology Admit Handler" podUID="a5af00fcdb5e9ea6d36778433aa4f221" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:08:37.957199 kubelet[2604]: I1213 01:08:37.957165 2604 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:08:37.957258 kubelet[2604]: I1213 01:08:37.957242 2604 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:08:38.051696 kubelet[2604]: I1213 01:08:38.051647 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5af00fcdb5e9ea6d36778433aa4f221-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5af00fcdb5e9ea6d36778433aa4f221\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:38.051852 
kubelet[2604]: I1213 01:08:38.051721 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:38.051852 kubelet[2604]: I1213 01:08:38.051762 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5af00fcdb5e9ea6d36778433aa4f221-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5af00fcdb5e9ea6d36778433aa4f221\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:38.051852 kubelet[2604]: I1213 01:08:38.051814 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5af00fcdb5e9ea6d36778433aa4f221-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a5af00fcdb5e9ea6d36778433aa4f221\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:38.051919 kubelet[2604]: I1213 01:08:38.051861 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:38.051951 kubelet[2604]: I1213 01:08:38.051919 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:38.051995 kubelet[2604]: I1213 01:08:38.051972 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:38.052057 kubelet[2604]: I1213 01:08:38.052016 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:38.052081 kubelet[2604]: I1213 01:08:38.052069 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:08:38.111235 kubelet[2604]: E1213 01:08:38.111177 2604 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:38.401129 kubelet[2604]: E1213 01:08:38.400904 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:38.401129 kubelet[2604]: E1213 01:08:38.400958 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:38.412644 kubelet[2604]: E1213 01:08:38.412598 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:38.730256 kubelet[2604]: I1213 01:08:38.730199 2604 apiserver.go:52] "Watching apiserver" Dec 13 01:08:38.751111 kubelet[2604]: I1213 01:08:38.751072 2604 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:08:38.869406 kubelet[2604]: E1213 01:08:38.869139 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:38.869406 kubelet[2604]: E1213 01:08:38.869320 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:38.946553 kubelet[2604]: E1213 01:08:38.946489 2604 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:38.947049 kubelet[2604]: I1213 01:08:38.946833 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.946821217 podStartE2EDuration="1.946821217s" podCreationTimestamp="2024-12-13 01:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:38.94655129 +0000 UTC m=+1.282228236" watchObservedRunningTime="2024-12-13 01:08:38.946821217 +0000 UTC m=+1.282498153" Dec 13 01:08:38.947049 kubelet[2604]: E1213 01:08:38.946939 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:39.461355 kubelet[2604]: I1213 01:08:39.461280 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.461257627 podStartE2EDuration="3.461257627s" podCreationTimestamp="2024-12-13 01:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:39.458577289 +0000 UTC m=+1.794254225" watchObservedRunningTime="2024-12-13 01:08:39.461257627 +0000 UTC m=+1.796934563" Dec 13 01:08:39.461864 kubelet[2604]: I1213 01:08:39.461470 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.4614659469999998 podStartE2EDuration="2.461465947s" podCreationTimestamp="2024-12-13 01:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:39.158409706 +0000 UTC m=+1.494086642" watchObservedRunningTime="2024-12-13 01:08:39.461465947 +0000 UTC m=+1.797142883" Dec 13 01:08:39.871053 kubelet[2604]: E1213 01:08:39.870923 2604 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:40.459864 kubelet[2604]: E1213 01:08:40.459812 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:40.478598 kubelet[2604]: E1213 01:08:40.478560 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:41.098604 kubelet[2604]: E1213 01:08:41.098472 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:42.907390 update_engine[1455]: I20241213 01:08:42.907320 1455 update_attempter.cc:509] Updating boot flags... Dec 13 01:08:42.982482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2679) Dec 13 01:08:43.040509 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2680) Dec 13 01:08:43.063450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2680) Dec 13 01:08:43.759519 sudo[1659]: pam_unix(sudo:session): session closed for user root Dec 13 01:08:43.761850 sshd[1656]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:43.764954 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:56828.service: Deactivated successfully. Dec 13 01:08:43.766806 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:08:43.766987 systemd[1]: session-9.scope: Consumed 4.690s CPU time, 189.4M memory peak, 0B memory swap peak. Dec 13 01:08:43.769268 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:08:43.770346 systemd-logind[1448]: Removed session 9. Dec 13 01:08:50.463645 kubelet[2604]: E1213 01:08:50.463604 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:50.481865 kubelet[2604]: E1213 01:08:50.481828 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:51.102741 kubelet[2604]: E1213 01:08:51.102702 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:52.889355 kubelet[2604]: I1213 01:08:52.889292 2604 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:08:52.890009 containerd[1470]: time="2024-12-13T01:08:52.889966819Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:08:52.890267 kubelet[2604]: I1213 01:08:52.890206 2604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:08:53.450456 kubelet[2604]: I1213 01:08:53.450389 2604 topology_manager.go:215] "Topology Admit Handler" podUID="8da74c7a-ff14-42c6-93c7-43febe420f8c" podNamespace="kube-system" podName="kube-proxy-mqv6g" Dec 13 01:08:53.462297 systemd[1]: Created slice kubepods-besteffort-pod8da74c7a_ff14_42c6_93c7_43febe420f8c.slice - libcontainer container kubepods-besteffort-pod8da74c7a_ff14_42c6_93c7_43febe420f8c.slice. Dec 13 01:08:53.651656 kubelet[2604]: I1213 01:08:53.651606 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8da74c7a-ff14-42c6-93c7-43febe420f8c-kube-proxy\") pod \"kube-proxy-mqv6g\" (UID: \"8da74c7a-ff14-42c6-93c7-43febe420f8c\") " pod="kube-system/kube-proxy-mqv6g" Dec 13 01:08:53.651656 kubelet[2604]: I1213 01:08:53.651643 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8da74c7a-ff14-42c6-93c7-43febe420f8c-xtables-lock\") pod \"kube-proxy-mqv6g\" (UID: \"8da74c7a-ff14-42c6-93c7-43febe420f8c\") " pod="kube-system/kube-proxy-mqv6g" Dec 13 01:08:53.651656 kubelet[2604]: I1213 01:08:53.651665 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8da74c7a-ff14-42c6-93c7-43febe420f8c-lib-modules\") pod \"kube-proxy-mqv6g\" (UID: \"8da74c7a-ff14-42c6-93c7-43febe420f8c\") " pod="kube-system/kube-proxy-mqv6g" Dec 13 01:08:53.651853 kubelet[2604]: I1213 01:08:53.651686 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9ff\" (UniqueName: \"kubernetes.io/projected/8da74c7a-ff14-42c6-93c7-43febe420f8c-kube-api-access-7s9ff\") pod \"kube-proxy-mqv6g\" (UID: \"8da74c7a-ff14-42c6-93c7-43febe420f8c\") " pod="kube-system/kube-proxy-mqv6g" Dec 13 01:08:53.950015 kubelet[2604]: I1213 01:08:53.949568 2604 topology_manager.go:215] "Topology Admit Handler" podUID="bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-km6xh" Dec 13 01:08:53.954672 kubelet[2604]: I1213 01:08:53.954553 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a-var-lib-calico\") pod \"tigera-operator-7bc55997bb-km6xh\" (UID: \"bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a\") " pod="tigera-operator/tigera-operator-7bc55997bb-km6xh" Dec 13 01:08:53.954672 kubelet[2604]: I1213 01:08:53.954611 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2s8\" (UniqueName: \"kubernetes.io/projected/bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a-kube-api-access-jf2s8\") pod \"tigera-operator-7bc55997bb-km6xh\" (UID: \"bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a\") " pod="tigera-operator/tigera-operator-7bc55997bb-km6xh" Dec 13 01:08:53.959427 systemd[1]: Created slice kubepods-besteffort-podbdcae05c_dabf_4ed4_a113_3c19dc2f0c3a.slice - libcontainer container kubepods-besteffort-podbdcae05c_dabf_4ed4_a113_3c19dc2f0c3a.slice. 
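The reconciler_common entries just above enumerate the volumes attached to kube-proxy-mqv6g: the kube-proxy ConfigMap, the xtables-lock and lib-modules host paths, and a projected service-account token (omitted below). A hedged sketch of those volume definitions using the corev1 Go types; the host paths (/run/xtables.lock, /lib/modules) are the conventional kubeadm values and are assumptions, since the log records only the volume names.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeProxyVolumes rebuilds the volume definitions implied by the reconciler
// log entries for kube-proxy-mqv6g. Paths are assumed defaults.
func kubeProxyVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			Name: "xtables-lock",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
			},
		},
		{
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
	}
}

func main() {
	for _, v := range kubeProxyVolumes() {
		fmt.Println(v.Name)
	}
}
```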
Dec 13 01:08:54.071205 kubelet[2604]: E1213 01:08:54.071161 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:54.071846 containerd[1470]: time="2024-12-13T01:08:54.071803175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqv6g,Uid:8da74c7a-ff14-42c6-93c7-43febe420f8c,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:54.101617 containerd[1470]: time="2024-12-13T01:08:54.101479637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:54.101617 containerd[1470]: time="2024-12-13T01:08:54.101555120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:54.101617 containerd[1470]: time="2024-12-13T01:08:54.101567104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:54.101809 containerd[1470]: time="2024-12-13T01:08:54.101659219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:54.129999 systemd[1]: Started cri-containerd-0d2a36f9916793b144a4d1949c099dd29d47ecc091a3da88e520570f2ed9f3c8.scope - libcontainer container 0d2a36f9916793b144a4d1949c099dd29d47ecc091a3da88e520570f2ed9f3c8. Dec 13 01:08:54.152314 containerd[1470]: time="2024-12-13T01:08:54.152267860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqv6g,Uid:8da74c7a-ff14-42c6-93c7-43febe420f8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2a36f9916793b144a4d1949c099dd29d47ecc091a3da88e520570f2ed9f3c8\"" Dec 13 01:08:54.153039 kubelet[2604]: E1213 01:08:54.153015 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:54.155565 containerd[1470]: time="2024-12-13T01:08:54.155522168Z" level=info msg="CreateContainer within sandbox \"0d2a36f9916793b144a4d1949c099dd29d47ecc091a3da88e520570f2ed9f3c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:08:54.173047 containerd[1470]: time="2024-12-13T01:08:54.172999514Z" level=info msg="CreateContainer within sandbox \"0d2a36f9916793b144a4d1949c099dd29d47ecc091a3da88e520570f2ed9f3c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca1b1406760566c4befc3141df51c957184283bbc0e5534ace6944a6d1e8cead\"" Dec 13 01:08:54.173515 containerd[1470]: time="2024-12-13T01:08:54.173486294Z" level=info msg="StartContainer for \"ca1b1406760566c4befc3141df51c957184283bbc0e5534ace6944a6d1e8cead\"" Dec 13 01:08:54.200564 systemd[1]: Started cri-containerd-ca1b1406760566c4befc3141df51c957184283bbc0e5534ace6944a6d1e8cead.scope - libcontainer container ca1b1406760566c4befc3141df51c957184283bbc0e5534ace6944a6d1e8cead. 
Dec 13 01:08:54.230023 containerd[1470]: time="2024-12-13T01:08:54.229970096Z" level=info msg="StartContainer for \"ca1b1406760566c4befc3141df51c957184283bbc0e5534ace6944a6d1e8cead\" returns successfully" Dec 13 01:08:54.263001 containerd[1470]: time="2024-12-13T01:08:54.262941959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-km6xh,Uid:bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:08:54.287846 containerd[1470]: time="2024-12-13T01:08:54.287751723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:54.287846 containerd[1470]: time="2024-12-13T01:08:54.287794860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:54.287846 containerd[1470]: time="2024-12-13T01:08:54.287804869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:54.288079 containerd[1470]: time="2024-12-13T01:08:54.287872546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:54.310598 systemd[1]: Started cri-containerd-6ef5896f5286171efb9970aebaf194e45d5b02dbc048e248c85b18d0c215436d.scope - libcontainer container 6ef5896f5286171efb9970aebaf194e45d5b02dbc048e248c85b18d0c215436d. Dec 13 01:08:54.347431 containerd[1470]: time="2024-12-13T01:08:54.347378408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-km6xh,Uid:bdcae05c-dabf-4ed4-a113-3c19dc2f0c3a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ef5896f5286171efb9970aebaf194e45d5b02dbc048e248c85b18d0c215436d\"" Dec 13 01:08:54.348981 containerd[1470]: time="2024-12-13T01:08:54.348949515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:08:54.894668 kubelet[2604]: E1213 01:08:54.894628 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:54.904227 systemd[1]: run-containerd-runc-k8s.io-0d2a36f9916793b144a4d1949c099dd29d47ecc091a3da88e520570f2ed9f3c8-runc.N1nSk9.mount: Deactivated successfully. Dec 13 01:08:54.905404 kubelet[2604]: I1213 01:08:54.904824 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mqv6g" podStartSLOduration=1.904703639 podStartE2EDuration="1.904703639s" podCreationTimestamp="2024-12-13 01:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:54.90423263 +0000 UTC m=+17.239909576" watchObservedRunningTime="2024-12-13 01:08:54.904703639 +0000 UTC m=+17.240380575" Dec 13 01:08:55.980485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299382313.mount: Deactivated successfully. 
Dec 13 01:08:56.894488 containerd[1470]: time="2024-12-13T01:08:56.894426418Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:56.895782 containerd[1470]: time="2024-12-13T01:08:56.895751345Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763685" Dec 13 01:08:56.897087 containerd[1470]: time="2024-12-13T01:08:56.897028274Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:56.900193 containerd[1470]: time="2024-12-13T01:08:56.900150934Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:56.901305 containerd[1470]: time="2024-12-13T01:08:56.900809935Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.551831732s" Dec 13 01:08:56.901305 containerd[1470]: time="2024-12-13T01:08:56.901065298Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:08:56.908770 containerd[1470]: time="2024-12-13T01:08:56.908727218Z" level=info msg="CreateContainer within sandbox \"6ef5896f5286171efb9970aebaf194e45d5b02dbc048e248c85b18d0c215436d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:08:56.919390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226714739.mount: Deactivated successfully. Dec 13 01:08:56.920989 containerd[1470]: time="2024-12-13T01:08:56.920948609Z" level=info msg="CreateContainer within sandbox \"6ef5896f5286171efb9970aebaf194e45d5b02dbc048e248c85b18d0c215436d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a42ed81cb0957eaefcf745fddc68aecc8aefe7dcda3507524eacbac5163f70c1\"" Dec 13 01:08:56.921739 containerd[1470]: time="2024-12-13T01:08:56.921505976Z" level=info msg="StartContainer for \"a42ed81cb0957eaefcf745fddc68aecc8aefe7dcda3507524eacbac5163f70c1\"" Dec 13 01:08:56.945641 systemd[1]: Started cri-containerd-a42ed81cb0957eaefcf745fddc68aecc8aefe7dcda3507524eacbac5163f70c1.scope - libcontainer container a42ed81cb0957eaefcf745fddc68aecc8aefe7dcda3507524eacbac5163f70c1. 
Dec 13 01:08:56.986349 containerd[1470]: time="2024-12-13T01:08:56.986294483Z" level=info msg="StartContainer for \"a42ed81cb0957eaefcf745fddc68aecc8aefe7dcda3507524eacbac5163f70c1\" returns successfully" Dec 13 01:08:57.909683 kubelet[2604]: I1213 01:08:57.909620 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-km6xh" podStartSLOduration=2.350934948 podStartE2EDuration="4.909601808s" podCreationTimestamp="2024-12-13 01:08:53 +0000 UTC" firstStartedPulling="2024-12-13 01:08:54.348550541 +0000 UTC m=+16.684227467" lastFinishedPulling="2024-12-13 01:08:56.907217391 +0000 UTC m=+19.242894327" observedRunningTime="2024-12-13 01:08:57.909172981 +0000 UTC m=+20.244849927" watchObservedRunningTime="2024-12-13 01:08:57.909601808 +0000 UTC m=+20.245278744" Dec 13 01:08:59.832428 kubelet[2604]: I1213 01:08:59.832366 2604 topology_manager.go:215] "Topology Admit Handler" podUID="df1beb0c-b72c-40aa-975c-b241cdf9c6f1" podNamespace="calico-system" podName="calico-typha-784c94f588-hsj8j" Dec 13 01:08:59.845098 systemd[1]: Created slice kubepods-besteffort-poddf1beb0c_b72c_40aa_975c_b241cdf9c6f1.slice - libcontainer container kubepods-besteffort-poddf1beb0c_b72c_40aa_975c_b241cdf9c6f1.slice. Dec 13 01:08:59.876817 kubelet[2604]: I1213 01:08:59.876278 2604 topology_manager.go:215] "Topology Admit Handler" podUID="a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7" podNamespace="calico-system" podName="calico-node-qcqsp" Dec 13 01:08:59.884574 systemd[1]: Created slice kubepods-besteffort-poda7f7e0f8_48cc_4f8a_8d89_a5689e36fcd7.slice - libcontainer container kubepods-besteffort-poda7f7e0f8_48cc_4f8a_8d89_a5689e36fcd7.slice. Dec 13 01:08:59.985058 kubelet[2604]: I1213 01:08:59.984621 2604 topology_manager.go:215] "Topology Admit Handler" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" podNamespace="calico-system" podName="csi-node-driver-ll64m" Dec 13 01:08:59.985058 kubelet[2604]: E1213 01:08:59.984948 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:08:59.996900 kubelet[2604]: I1213 01:08:59.996857 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-flexvol-driver-host\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.996900 kubelet[2604]: I1213 01:08:59.996904 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-cni-net-dir\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.996900 kubelet[2604]: I1213 01:08:59.996922 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ddefbe4-94ce-41d5-835d-00042427ce7d-registration-dir\") pod \"csi-node-driver-ll64m\" (UID: \"7ddefbe4-94ce-41d5-835d-00042427ce7d\") " pod="calico-system/csi-node-driver-ll64m" Dec 13 01:08:59.997134 kubelet[2604]: I1213 01:08:59.996949 2604 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-node-certs\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997134 kubelet[2604]: I1213 01:08:59.996966 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmhh\" (UniqueName: \"kubernetes.io/projected/df1beb0c-b72c-40aa-975c-b241cdf9c6f1-kube-api-access-hkmhh\") pod \"calico-typha-784c94f588-hsj8j\" (UID: \"df1beb0c-b72c-40aa-975c-b241cdf9c6f1\") " pod="calico-system/calico-typha-784c94f588-hsj8j" Dec 13 01:08:59.997134 kubelet[2604]: I1213 01:08:59.996982 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-lib-modules\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997134 kubelet[2604]: I1213 01:08:59.997012 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ddefbe4-94ce-41d5-835d-00042427ce7d-socket-dir\") pod \"csi-node-driver-ll64m\" (UID: \"7ddefbe4-94ce-41d5-835d-00042427ce7d\") " pod="calico-system/csi-node-driver-ll64m" Dec 13 01:08:59.997134 kubelet[2604]: I1213 01:08:59.997054 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rlx\" (UniqueName: \"kubernetes.io/projected/7ddefbe4-94ce-41d5-835d-00042427ce7d-kube-api-access-29rlx\") pod \"csi-node-driver-ll64m\" (UID: \"7ddefbe4-94ce-41d5-835d-00042427ce7d\") " pod="calico-system/csi-node-driver-ll64m" Dec 13 01:08:59.997291 kubelet[2604]: I1213 01:08:59.997087 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-cni-bin-dir\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997291 kubelet[2604]: I1213 01:08:59.997112 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-xtables-lock\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997291 kubelet[2604]: I1213 01:08:59.997138 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-cni-log-dir\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997291 kubelet[2604]: I1213 01:08:59.997164 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1beb0c-b72c-40aa-975c-b241cdf9c6f1-tigera-ca-bundle\") pod \"calico-typha-784c94f588-hsj8j\" (UID: \"df1beb0c-b72c-40aa-975c-b241cdf9c6f1\") " pod="calico-system/calico-typha-784c94f588-hsj8j" Dec 13 01:08:59.997291 kubelet[2604]: I1213 01:08:59.997209 2604 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpn4\" (UniqueName: \"kubernetes.io/projected/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-kube-api-access-wzpn4\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997495 kubelet[2604]: I1213 01:08:59.997233 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ddefbe4-94ce-41d5-835d-00042427ce7d-varrun\") pod \"csi-node-driver-ll64m\" (UID: \"7ddefbe4-94ce-41d5-835d-00042427ce7d\") " pod="calico-system/csi-node-driver-ll64m" Dec 13 01:08:59.997495 kubelet[2604]: I1213 01:08:59.997323 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ddefbe4-94ce-41d5-835d-00042427ce7d-kubelet-dir\") pod \"csi-node-driver-ll64m\" (UID: \"7ddefbe4-94ce-41d5-835d-00042427ce7d\") " pod="calico-system/csi-node-driver-ll64m" Dec 13 01:08:59.997495 kubelet[2604]: I1213 01:08:59.997466 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/df1beb0c-b72c-40aa-975c-b241cdf9c6f1-typha-certs\") pod \"calico-typha-784c94f588-hsj8j\" (UID: \"df1beb0c-b72c-40aa-975c-b241cdf9c6f1\") " pod="calico-system/calico-typha-784c94f588-hsj8j" Dec 13 01:08:59.997495 kubelet[2604]: I1213 01:08:59.997492 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-policysync\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997635 kubelet[2604]: I1213 01:08:59.997517 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-var-lib-calico\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997635 kubelet[2604]: I1213 01:08:59.997544 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-tigera-ca-bundle\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:08:59.997635 kubelet[2604]: I1213 01:08:59.997565 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7-var-run-calico\") pod \"calico-node-qcqsp\" (UID: \"a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7\") " pod="calico-system/calico-node-qcqsp" Dec 13 01:09:00.100432 kubelet[2604]: E1213 01:09:00.100203 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.100432 kubelet[2604]: W1213 01:09:00.100229 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.100432 kubelet[2604]: E1213 
01:09:00.100266 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.100637 kubelet[2604]: E1213 01:09:00.100516 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.100637 kubelet[2604]: W1213 01:09:00.100527 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.100637 kubelet[2604]: E1213 01:09:00.100538 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.105340 kubelet[2604]: E1213 01:09:00.101162 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.105340 kubelet[2604]: W1213 01:09:00.101180 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.105340 kubelet[2604]: E1213 01:09:00.101199 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.108108 kubelet[2604]: E1213 01:09:00.107952 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.108108 kubelet[2604]: W1213 01:09:00.107977 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.108108 kubelet[2604]: E1213 01:09:00.108063 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.109129 kubelet[2604]: E1213 01:09:00.109093 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.109622 kubelet[2604]: W1213 01:09:00.109111 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.109714 kubelet[2604]: E1213 01:09:00.109687 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.111655 kubelet[2604]: E1213 01:09:00.111628 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.111655 kubelet[2604]: W1213 01:09:00.111645 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.111740 kubelet[2604]: E1213 01:09:00.111692 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:00.111902 kubelet[2604]: E1213 01:09:00.111873 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.111902 kubelet[2604]: W1213 01:09:00.111893 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.111986 kubelet[2604]: E1213 01:09:00.111952 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.114452 kubelet[2604]: E1213 01:09:00.114412 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.114524 kubelet[2604]: W1213 01:09:00.114433 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.114565 kubelet[2604]: E1213 01:09:00.114548 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.114785 kubelet[2604]: E1213 01:09:00.114764 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.114785 kubelet[2604]: W1213 01:09:00.114778 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.114869 kubelet[2604]: E1213 01:09:00.114847 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.115062 kubelet[2604]: E1213 01:09:00.115042 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.115062 kubelet[2604]: W1213 01:09:00.115056 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.115137 kubelet[2604]: E1213 01:09:00.115095 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.115329 kubelet[2604]: E1213 01:09:00.115310 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.115329 kubelet[2604]: W1213 01:09:00.115324 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.115404 kubelet[2604]: E1213 01:09:00.115362 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:00.115640 kubelet[2604]: E1213 01:09:00.115620 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.115640 kubelet[2604]: W1213 01:09:00.115635 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.115717 kubelet[2604]: E1213 01:09:00.115669 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.115892 kubelet[2604]: E1213 01:09:00.115872 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.115892 kubelet[2604]: W1213 01:09:00.115887 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.115983 kubelet[2604]: E1213 01:09:00.115957 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.116133 kubelet[2604]: E1213 01:09:00.116113 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.116133 kubelet[2604]: W1213 01:09:00.116128 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.116208 kubelet[2604]: E1213 01:09:00.116189 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.116371 kubelet[2604]: E1213 01:09:00.116351 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.116371 kubelet[2604]: W1213 01:09:00.116365 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.116462 kubelet[2604]: E1213 01:09:00.116426 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.117133 kubelet[2604]: E1213 01:09:00.116626 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.117133 kubelet[2604]: W1213 01:09:00.116638 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.117133 kubelet[2604]: E1213 01:09:00.116672 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:00.117133 kubelet[2604]: E1213 01:09:00.116865 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.117133 kubelet[2604]: W1213 01:09:00.116875 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.117133 kubelet[2604]: E1213 01:09:00.116894 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.117327 kubelet[2604]: E1213 01:09:00.117147 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.117327 kubelet[2604]: W1213 01:09:00.117159 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.117327 kubelet[2604]: E1213 01:09:00.117173 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.118325 kubelet[2604]: E1213 01:09:00.118302 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.118325 kubelet[2604]: W1213 01:09:00.118314 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.118325 kubelet[2604]: E1213 01:09:00.118327 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.118601 kubelet[2604]: E1213 01:09:00.118532 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.118601 kubelet[2604]: W1213 01:09:00.118542 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.118601 kubelet[2604]: E1213 01:09:00.118552 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.118791 kubelet[2604]: E1213 01:09:00.118750 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.118791 kubelet[2604]: W1213 01:09:00.118765 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.118791 kubelet[2604]: E1213 01:09:00.118772 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:00.118973 kubelet[2604]: E1213 01:09:00.118957 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.118973 kubelet[2604]: W1213 01:09:00.118968 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.118973 kubelet[2604]: E1213 01:09:00.118975 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.119316 kubelet[2604]: E1213 01:09:00.119298 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:00.119316 kubelet[2604]: W1213 01:09:00.119309 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:00.119316 kubelet[2604]: E1213 01:09:00.119319 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:00.151240 kubelet[2604]: E1213 01:09:00.151186 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:00.151982 containerd[1470]: time="2024-12-13T01:09:00.151921843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784c94f588-hsj8j,Uid:df1beb0c-b72c-40aa-975c-b241cdf9c6f1,Namespace:calico-system,Attempt:0,}" Dec 13 01:09:00.179806 containerd[1470]: time="2024-12-13T01:09:00.179696154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:00.179806 containerd[1470]: time="2024-12-13T01:09:00.179759039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:00.179806 containerd[1470]: time="2024-12-13T01:09:00.179774039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:00.180135 containerd[1470]: time="2024-12-13T01:09:00.180074577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:00.188853 kubelet[2604]: E1213 01:09:00.188822 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:00.189343 containerd[1470]: time="2024-12-13T01:09:00.189298826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qcqsp,Uid:a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7,Namespace:calico-system,Attempt:0,}" Dec 13 01:09:00.199641 systemd[1]: Started cri-containerd-1fb794904bfd56cdb8d99e683124155815627ec3a0cfa5d09384c79669f38597.scope - libcontainer container 1fb794904bfd56cdb8d99e683124155815627ec3a0cfa5d09384c79669f38597. Dec 13 01:09:00.218277 containerd[1470]: time="2024-12-13T01:09:00.218002866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:00.218277 containerd[1470]: time="2024-12-13T01:09:00.218063235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:00.218277 containerd[1470]: time="2024-12-13T01:09:00.218079347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:00.218277 containerd[1470]: time="2024-12-13T01:09:00.218173825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:00.241628 systemd[1]: Started cri-containerd-41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d.scope - libcontainer container 41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d. Dec 13 01:09:00.244375 containerd[1470]: time="2024-12-13T01:09:00.244296836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784c94f588-hsj8j,Uid:df1beb0c-b72c-40aa-975c-b241cdf9c6f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fb794904bfd56cdb8d99e683124155815627ec3a0cfa5d09384c79669f38597\"" Dec 13 01:09:00.246786 kubelet[2604]: E1213 01:09:00.246760 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:00.254179 containerd[1470]: time="2024-12-13T01:09:00.254127050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:09:00.268377 containerd[1470]: time="2024-12-13T01:09:00.268326957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qcqsp,Uid:a7f7e0f8-48cc-4f8a-8d89-a5689e36fcd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\"" Dec 13 01:09:00.269010 kubelet[2604]: E1213 01:09:00.268987 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:01.927783 kubelet[2604]: E1213 01:09:01.927733 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:01.947844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666666232.mount: Deactivated successfully. 
Dec 13 01:09:02.263666 containerd[1470]: time="2024-12-13T01:09:02.263614317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:02.264521 containerd[1470]: time="2024-12-13T01:09:02.264484751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:09:02.265667 containerd[1470]: time="2024-12-13T01:09:02.265613605Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:02.267518 containerd[1470]: time="2024-12-13T01:09:02.267467366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:02.268045 containerd[1470]: time="2024-12-13T01:09:02.268014009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.013844174s" Dec 13 01:09:02.268075 containerd[1470]: time="2024-12-13T01:09:02.268042325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:09:02.270165 containerd[1470]: time="2024-12-13T01:09:02.270120318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:09:02.277763 containerd[1470]: time="2024-12-13T01:09:02.277720546Z" level=info msg="CreateContainer within sandbox \"1fb794904bfd56cdb8d99e683124155815627ec3a0cfa5d09384c79669f38597\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:09:02.292299 containerd[1470]: time="2024-12-13T01:09:02.292260023Z" level=info msg="CreateContainer within sandbox \"1fb794904bfd56cdb8d99e683124155815627ec3a0cfa5d09384c79669f38597\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"988c55066ebe9abb727293aa38fc43bab6bc19b3de41eb38f3b7a408116dce0e\"" Dec 13 01:09:02.292710 containerd[1470]: time="2024-12-13T01:09:02.292672991Z" level=info msg="StartContainer for \"988c55066ebe9abb727293aa38fc43bab6bc19b3de41eb38f3b7a408116dce0e\"" Dec 13 01:09:02.326595 systemd[1]: Started cri-containerd-988c55066ebe9abb727293aa38fc43bab6bc19b3de41eb38f3b7a408116dce0e.scope - libcontainer container 988c55066ebe9abb727293aa38fc43bab6bc19b3de41eb38f3b7a408116dce0e. 
Dec 13 01:09:02.365889 containerd[1470]: time="2024-12-13T01:09:02.365827839Z" level=info msg="StartContainer for \"988c55066ebe9abb727293aa38fc43bab6bc19b3de41eb38f3b7a408116dce0e\" returns successfully" Dec 13 01:09:02.928362 kubelet[2604]: E1213 01:09:02.928302 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:02.940128 kubelet[2604]: I1213 01:09:02.940072 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784c94f588-hsj8j" podStartSLOduration=1.92038838 podStartE2EDuration="3.940056303s" podCreationTimestamp="2024-12-13 01:08:59 +0000 UTC" firstStartedPulling="2024-12-13 01:09:00.250255326 +0000 UTC m=+22.585932262" lastFinishedPulling="2024-12-13 01:09:02.269923249 +0000 UTC m=+24.605600185" observedRunningTime="2024-12-13 01:09:02.939714836 +0000 UTC m=+25.275391772" watchObservedRunningTime="2024-12-13 01:09:02.940056303 +0000 UTC m=+25.275733239" Dec 13 01:09:03.016299 kubelet[2604]: E1213 01:09:03.016244 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.016299 kubelet[2604]: W1213 01:09:03.016284 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.016299 kubelet[2604]: E1213 01:09:03.016310 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.016660 kubelet[2604]: E1213 01:09:03.016642 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.016660 kubelet[2604]: W1213 01:09:03.016658 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.016721 kubelet[2604]: E1213 01:09:03.016670 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.016923 kubelet[2604]: E1213 01:09:03.016906 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.016923 kubelet[2604]: W1213 01:09:03.016921 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.016981 kubelet[2604]: E1213 01:09:03.016932 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.017290 kubelet[2604]: E1213 01:09:03.017262 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.017290 kubelet[2604]: W1213 01:09:03.017278 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.017345 kubelet[2604]: E1213 01:09:03.017290 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.017575 kubelet[2604]: E1213 01:09:03.017551 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.017575 kubelet[2604]: W1213 01:09:03.017569 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.017636 kubelet[2604]: E1213 01:09:03.017581 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.017804 kubelet[2604]: E1213 01:09:03.017782 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.017804 kubelet[2604]: W1213 01:09:03.017796 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.017852 kubelet[2604]: E1213 01:09:03.017809 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.018050 kubelet[2604]: E1213 01:09:03.018034 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.018050 kubelet[2604]: W1213 01:09:03.018047 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.018105 kubelet[2604]: E1213 01:09:03.018058 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.018383 kubelet[2604]: E1213 01:09:03.018313 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.018383 kubelet[2604]: W1213 01:09:03.018331 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.018383 kubelet[2604]: E1213 01:09:03.018342 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.018654 kubelet[2604]: E1213 01:09:03.018637 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.018684 kubelet[2604]: W1213 01:09:03.018653 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.018684 kubelet[2604]: E1213 01:09:03.018665 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.018895 kubelet[2604]: E1213 01:09:03.018879 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.018895 kubelet[2604]: W1213 01:09:03.018893 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.018948 kubelet[2604]: E1213 01:09:03.018904 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.019144 kubelet[2604]: E1213 01:09:03.019129 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.019144 kubelet[2604]: W1213 01:09:03.019141 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.019203 kubelet[2604]: E1213 01:09:03.019150 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.019378 kubelet[2604]: E1213 01:09:03.019359 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.019378 kubelet[2604]: W1213 01:09:03.019374 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.019458 kubelet[2604]: E1213 01:09:03.019384 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.019657 kubelet[2604]: E1213 01:09:03.019641 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.019657 kubelet[2604]: W1213 01:09:03.019654 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.019710 kubelet[2604]: E1213 01:09:03.019665 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.019885 kubelet[2604]: E1213 01:09:03.019870 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.019885 kubelet[2604]: W1213 01:09:03.019883 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.019933 kubelet[2604]: E1213 01:09:03.019893 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.020136 kubelet[2604]: E1213 01:09:03.020119 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.020168 kubelet[2604]: W1213 01:09:03.020145 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.020168 kubelet[2604]: E1213 01:09:03.020156 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.020498 kubelet[2604]: E1213 01:09:03.020481 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.020498 kubelet[2604]: W1213 01:09:03.020495 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.020558 kubelet[2604]: E1213 01:09:03.020506 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.020768 kubelet[2604]: E1213 01:09:03.020752 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.020768 kubelet[2604]: W1213 01:09:03.020766 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.020818 kubelet[2604]: E1213 01:09:03.020782 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.021032 kubelet[2604]: E1213 01:09:03.021016 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.021032 kubelet[2604]: W1213 01:09:03.021030 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.021086 kubelet[2604]: E1213 01:09:03.021048 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.021345 kubelet[2604]: E1213 01:09:03.021314 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.021386 kubelet[2604]: W1213 01:09:03.021345 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.021386 kubelet[2604]: E1213 01:09:03.021372 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.021673 kubelet[2604]: E1213 01:09:03.021659 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.021673 kubelet[2604]: W1213 01:09:03.021670 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.021719 kubelet[2604]: E1213 01:09:03.021685 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.021909 kubelet[2604]: E1213 01:09:03.021894 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.021909 kubelet[2604]: W1213 01:09:03.021906 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.021956 kubelet[2604]: E1213 01:09:03.021920 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.022213 kubelet[2604]: E1213 01:09:03.022191 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.022213 kubelet[2604]: W1213 01:09:03.022203 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.022272 kubelet[2604]: E1213 01:09:03.022216 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.022482 kubelet[2604]: E1213 01:09:03.022426 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.022482 kubelet[2604]: W1213 01:09:03.022462 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.022542 kubelet[2604]: E1213 01:09:03.022503 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.022703 kubelet[2604]: E1213 01:09:03.022667 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.022703 kubelet[2604]: W1213 01:09:03.022692 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.022768 kubelet[2604]: E1213 01:09:03.022730 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.022977 kubelet[2604]: E1213 01:09:03.022954 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.022977 kubelet[2604]: W1213 01:09:03.022968 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.023028 kubelet[2604]: E1213 01:09:03.022983 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.023343 kubelet[2604]: E1213 01:09:03.023307 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.023343 kubelet[2604]: W1213 01:09:03.023333 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.023506 kubelet[2604]: E1213 01:09:03.023371 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.023688 kubelet[2604]: E1213 01:09:03.023672 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.023688 kubelet[2604]: W1213 01:09:03.023684 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.023767 kubelet[2604]: E1213 01:09:03.023697 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.024013 kubelet[2604]: E1213 01:09:03.023981 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.024013 kubelet[2604]: W1213 01:09:03.024001 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.024068 kubelet[2604]: E1213 01:09:03.024017 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.024248 kubelet[2604]: E1213 01:09:03.024232 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.024248 kubelet[2604]: W1213 01:09:03.024243 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.024304 kubelet[2604]: E1213 01:09:03.024259 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.024553 kubelet[2604]: E1213 01:09:03.024537 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.024553 kubelet[2604]: W1213 01:09:03.024551 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.024597 kubelet[2604]: E1213 01:09:03.024566 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.024783 kubelet[2604]: E1213 01:09:03.024767 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.024783 kubelet[2604]: W1213 01:09:03.024780 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.024837 kubelet[2604]: E1213 01:09:03.024795 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.025004 kubelet[2604]: E1213 01:09:03.024989 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.025004 kubelet[2604]: W1213 01:09:03.025000 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.025066 kubelet[2604]: E1213 01:09:03.025008 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:09:03.025420 kubelet[2604]: E1213 01:09:03.025405 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:09:03.025420 kubelet[2604]: W1213 01:09:03.025417 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:09:03.025493 kubelet[2604]: E1213 01:09:03.025426 2604 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:09:03.555161 containerd[1470]: time="2024-12-13T01:09:03.555105353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:03.555933 containerd[1470]: time="2024-12-13T01:09:03.555895535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:09:03.556989 containerd[1470]: time="2024-12-13T01:09:03.556940610Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:03.558847 containerd[1470]: time="2024-12-13T01:09:03.558812420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:03.559431 containerd[1470]: time="2024-12-13T01:09:03.559394741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.289239443s" Dec 13 01:09:03.559431 containerd[1470]: time="2024-12-13T01:09:03.559425592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:09:03.561597 containerd[1470]: time="2024-12-13T01:09:03.561545833Z" level=info msg="CreateContainer within sandbox \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:09:03.579434 containerd[1470]: time="2024-12-13T01:09:03.579381697Z" level=info msg="CreateContainer within sandbox \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e\"" Dec 13 01:09:03.579825 containerd[1470]: time="2024-12-13T01:09:03.579790035Z" level=info msg="StartContainer for \"917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e\"" Dec 13 01:09:03.615584 systemd[1]: Started cri-containerd-917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e.scope - libcontainer container 917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e. Dec 13 01:09:03.645111 containerd[1470]: time="2024-12-13T01:09:03.645069787Z" level=info msg="StartContainer for \"917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e\" returns successfully" Dec 13 01:09:03.655573 systemd[1]: cri-containerd-917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e.scope: Deactivated successfully. 
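The repeated driver-call failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ for a FlexVolume driver: the `uds` binary is not on $PATH, so the `init` call produces empty output, and decoding an empty string as JSON yields "unexpected end of JSON input". A minimal Go sketch of that failure mode, using a simplified stand-in for the kubelet's driver-status type (not its exact struct); the Calico flexvol-driver container started above is the component that would normally install such a driver:

```go
package main

import (
    "encoding/json"
    "fmt"
)

// driverStatus is a simplified stand-in for the JSON a FlexVolume driver
// is expected to print in response to "init"; it is not the kubelet's
// exact type.
type driverStatus struct {
    Status       string          `json:"status"`
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    var st driverStatus

    // A missing driver binary produces no output at all; decoding the empty
    // string reproduces the error seen in the kubelet log.
    if err := json.Unmarshal([]byte(""), &st); err != nil {
        fmt.Println("unmarshal error:", err) // unexpected end of JSON input
    }

    // A working driver would answer "init" with JSON along these lines
    // (hypothetical example payload).
    ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
    if err := json.Unmarshal(ok, &st); err == nil {
        fmt.Printf("parsed driver status: %+v\n", st)
    }
}
```

The same three messages recur with fresh timestamps because the probe runs on every plugin-directory scan until a usable driver binary appears.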
Dec 13 01:09:03.856800 kubelet[2604]: E1213 01:09:03.856632 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:03.931124 kubelet[2604]: I1213 01:09:03.931089 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:09:03.931654 kubelet[2604]: E1213 01:09:03.931392 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:03.931654 kubelet[2604]: E1213 01:09:03.931607 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:03.948110 containerd[1470]: time="2024-12-13T01:09:03.948047655Z" level=info msg="shim disconnected" id=917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e namespace=k8s.io Dec 13 01:09:03.948110 containerd[1470]: time="2024-12-13T01:09:03.948105540Z" level=warning msg="cleaning up after shim disconnected" id=917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e namespace=k8s.io Dec 13 01:09:03.948110 containerd[1470]: time="2024-12-13T01:09:03.948116211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:09:04.274852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917d658df31aac3136453a076cc0d07802124f84faa3d6b4512e0878eee7b95e-rootfs.mount: Deactivated successfully. Dec 13 01:09:04.934633 kubelet[2604]: E1213 01:09:04.934600 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:04.935638 containerd[1470]: time="2024-12-13T01:09:04.935601698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:09:05.857427 kubelet[2604]: E1213 01:09:05.857385 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:07.857423 kubelet[2604]: E1213 01:09:07.857359 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:09.857225 kubelet[2604]: E1213 01:09:09.857157 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:10.761710 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:40986.service - OpenSSH per-connection server daemon (10.0.0.1:40986). 
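The "Nameserver limits exceeded" warnings reflect the kubelet trimming the host resolv.conf to the small number of nameservers a pod can actually use; in this log only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive. A small Go sketch of that trimming, with the limit of three taken as an assumption consistent with the applied line rather than read from kubelet source:

```go
package main

import "fmt"

// maxNameservers is assumed to be three, matching the applied nameserver
// line reported in the warning; the real constant lives in kubelet code.
const maxNameservers = 3

// applyNameserverLimit keeps the first maxNameservers entries and reports
// which ones were omitted, mirroring the behaviour the warning describes.
func applyNameserverLimit(servers []string) (kept, omitted []string) {
    if len(servers) <= maxNameservers {
        return servers, nil
    }
    return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
    // Hypothetical resolv.conf contents with one entry too many.
    servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
    kept, omitted := applyNameserverLimit(servers)
    fmt.Println("applied nameserver line:", kept)
    fmt.Println("omitted:", omitted)
}
```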
Dec 13 01:09:11.056564 sshd[3278]: Accepted publickey for core from 10.0.0.1 port 40986 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:11.058366 sshd[3278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:11.065515 systemd-logind[1448]: New session 10 of user core. Dec 13 01:09:11.074591 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:09:11.077767 containerd[1470]: time="2024-12-13T01:09:11.077718117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:11.078687 containerd[1470]: time="2024-12-13T01:09:11.078525387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:09:11.079723 containerd[1470]: time="2024-12-13T01:09:11.079679475Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:11.095236 containerd[1470]: time="2024-12-13T01:09:11.095160097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:11.095969 containerd[1470]: time="2024-12-13T01:09:11.095914503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.160266744s" Dec 13 01:09:11.096016 containerd[1470]: time="2024-12-13T01:09:11.095966314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:09:11.097983 containerd[1470]: time="2024-12-13T01:09:11.097952521Z" level=info msg="CreateContainer within sandbox \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:09:11.113574 containerd[1470]: time="2024-12-13T01:09:11.113527376Z" level=info msg="CreateContainer within sandbox \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68\"" Dec 13 01:09:11.114233 containerd[1470]: time="2024-12-13T01:09:11.114095958Z" level=info msg="StartContainer for \"3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68\"" Dec 13 01:09:11.148578 systemd[1]: Started cri-containerd-3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68.scope - libcontainer container 3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68. Dec 13 01:09:11.183043 containerd[1470]: time="2024-12-13T01:09:11.182984513Z" level=info msg="StartContainer for \"3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68\" returns successfully" Dec 13 01:09:11.216143 sshd[3278]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:11.221221 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:40986.service: Deactivated successfully. Dec 13 01:09:11.223254 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 13 01:09:11.223929 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:09:11.225126 systemd-logind[1448]: Removed session 10. Dec 13 01:09:11.857148 kubelet[2604]: E1213 01:09:11.857094 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:11.946278 kubelet[2604]: E1213 01:09:11.946249 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:12.948103 kubelet[2604]: E1213 01:09:12.948059 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:13.397676 containerd[1470]: time="2024-12-13T01:09:13.397518818Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:09:13.400255 systemd[1]: cri-containerd-3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68.scope: Deactivated successfully. Dec 13 01:09:13.415325 kubelet[2604]: I1213 01:09:13.415287 2604 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:09:13.428833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68-rootfs.mount: Deactivated successfully. 
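The containerd error above ("no network config found in /etc/cni/net.d") fires because the file that changed, calico-kubeconfig, is not itself a network configuration; the runtime keeps reporting the CNI plugin as uninitialized until a network config file shows up in that directory. A rough Go sketch of that directory check, with the accepted file extensions treated as an assumption about the config loader:

```go
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // Directory the container runtime watches for CNI network configs.
    const cniConfDir = "/etc/cni/net.d"

    entries, err := os.ReadDir(cniConfDir)
    if err != nil {
        fmt.Println("cannot read CNI config dir:", err)
        return
    }

    var configs []string
    for _, e := range entries {
        // Assumed set of extensions the config loader recognizes; other
        // files in the directory (such as calico-kubeconfig) do not count.
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json":
            configs = append(configs, e.Name())
        }
    }

    if len(configs) == 0 {
        fmt.Println("no network config found in", cniConfDir)
        return
    }
    fmt.Println("network configs:", configs)
}
```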
Dec 13 01:09:13.467799 kubelet[2604]: I1213 01:09:13.467738 2604 topology_manager.go:215] "Topology Admit Handler" podUID="02b88a36-d2b9-4dbc-acd0-c7e3095fe180" podNamespace="calico-apiserver" podName="calico-apiserver-87f858bdd-n5nxj" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.468031 2604 topology_manager.go:215] "Topology Admit Handler" podUID="845ed845-9b07-4cfb-b5d6-9248233c4e24" podNamespace="calico-apiserver" podName="calico-apiserver-87f858bdd-ssrdn" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.468136 2604 topology_manager.go:215] "Topology Admit Handler" podUID="948371ee-1334-4913-b824-f4d34d66addf" podNamespace="calico-system" podName="calico-kube-controllers-79d779859c-vbhrm" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.468241 2604 topology_manager.go:215] "Topology Admit Handler" podUID="9ca75959-8db7-4b67-a9a1-33128730b6d4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hrntc" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.468349 2604 topology_manager.go:215] "Topology Admit Handler" podUID="ab737b6d-349c-469a-b31b-6775293b8eb1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jnh4m" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.493272 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca75959-8db7-4b67-a9a1-33128730b6d4-config-volume\") pod \"coredns-7db6d8ff4d-hrntc\" (UID: \"9ca75959-8db7-4b67-a9a1-33128730b6d4\") " pod="kube-system/coredns-7db6d8ff4d-hrntc" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.493299 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf8q\" (UniqueName: \"kubernetes.io/projected/948371ee-1334-4913-b824-f4d34d66addf-kube-api-access-vtf8q\") pod \"calico-kube-controllers-79d779859c-vbhrm\" (UID: \"948371ee-1334-4913-b824-f4d34d66addf\") " pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" Dec 13 01:09:13.505769 kubelet[2604]: I1213 01:09:13.493322 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2jcb\" (UniqueName: \"kubernetes.io/projected/9ca75959-8db7-4b67-a9a1-33128730b6d4-kube-api-access-f2jcb\") pod \"coredns-7db6d8ff4d-hrntc\" (UID: \"9ca75959-8db7-4b67-a9a1-33128730b6d4\") " pod="kube-system/coredns-7db6d8ff4d-hrntc" Dec 13 01:09:13.475164 systemd[1]: Created slice kubepods-besteffort-pod02b88a36_d2b9_4dbc_acd0_c7e3095fe180.slice - libcontainer container kubepods-besteffort-pod02b88a36_d2b9_4dbc_acd0_c7e3095fe180.slice. 
Dec 13 01:09:13.506168 kubelet[2604]: I1213 01:09:13.493343 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/948371ee-1334-4913-b824-f4d34d66addf-tigera-ca-bundle\") pod \"calico-kube-controllers-79d779859c-vbhrm\" (UID: \"948371ee-1334-4913-b824-f4d34d66addf\") " pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" Dec 13 01:09:13.506168 kubelet[2604]: I1213 01:09:13.493359 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab737b6d-349c-469a-b31b-6775293b8eb1-config-volume\") pod \"coredns-7db6d8ff4d-jnh4m\" (UID: \"ab737b6d-349c-469a-b31b-6775293b8eb1\") " pod="kube-system/coredns-7db6d8ff4d-jnh4m" Dec 13 01:09:13.506168 kubelet[2604]: I1213 01:09:13.493378 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/845ed845-9b07-4cfb-b5d6-9248233c4e24-calico-apiserver-certs\") pod \"calico-apiserver-87f858bdd-ssrdn\" (UID: \"845ed845-9b07-4cfb-b5d6-9248233c4e24\") " pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" Dec 13 01:09:13.506168 kubelet[2604]: I1213 01:09:13.493395 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24tj\" (UniqueName: \"kubernetes.io/projected/ab737b6d-349c-469a-b31b-6775293b8eb1-kube-api-access-t24tj\") pod \"coredns-7db6d8ff4d-jnh4m\" (UID: \"ab737b6d-349c-469a-b31b-6775293b8eb1\") " pod="kube-system/coredns-7db6d8ff4d-jnh4m" Dec 13 01:09:13.506168 kubelet[2604]: I1213 01:09:13.493415 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nns8c\" (UniqueName: \"kubernetes.io/projected/845ed845-9b07-4cfb-b5d6-9248233c4e24-kube-api-access-nns8c\") pod \"calico-apiserver-87f858bdd-ssrdn\" (UID: \"845ed845-9b07-4cfb-b5d6-9248233c4e24\") " pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" Dec 13 01:09:13.481708 systemd[1]: Created slice kubepods-besteffort-pod845ed845_9b07_4cfb_b5d6_9248233c4e24.slice - libcontainer container kubepods-besteffort-pod845ed845_9b07_4cfb_b5d6_9248233c4e24.slice. Dec 13 01:09:13.486834 systemd[1]: Created slice kubepods-besteffort-pod948371ee_1334_4913_b824_f4d34d66addf.slice - libcontainer container kubepods-besteffort-pod948371ee_1334_4913_b824_f4d34d66addf.slice. Dec 13 01:09:13.494341 systemd[1]: Created slice kubepods-burstable-podab737b6d_349c_469a_b31b_6775293b8eb1.slice - libcontainer container kubepods-burstable-podab737b6d_349c_469a_b31b_6775293b8eb1.slice. Dec 13 01:09:13.499918 systemd[1]: Created slice kubepods-burstable-pod9ca75959_8db7_4b67_a9a1_33128730b6d4.slice - libcontainer container kubepods-burstable-pod9ca75959_8db7_4b67_a9a1_33128730b6d4.slice. 
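The "Created slice" messages interleaved above show the cgroup naming the kubelet requests from systemd: the pod's QoS class becomes part of the slice prefix and dashes in the pod UID are rewritten as underscores. A short illustration of that observed pattern (not the kubelet's actual cgroup-name code):

```go
package main

import (
    "fmt"
    "strings"
)

// podSliceName reproduces the naming visible in the systemd messages above;
// it is an illustration of the observed pattern only.
func podSliceName(qosClass, podUID string) string {
    return fmt.Sprintf("kubepods-%s-pod%s.slice",
        qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
    // UIDs taken from the Topology Admit Handler entries in this log.
    fmt.Println(podSliceName("besteffort", "02b88a36-d2b9-4dbc-acd0-c7e3095fe180"))
    fmt.Println(podSliceName("burstable", "ab737b6d-349c-469a-b31b-6775293b8eb1"))
}
```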
Dec 13 01:09:13.593970 kubelet[2604]: I1213 01:09:13.593919 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02b88a36-d2b9-4dbc-acd0-c7e3095fe180-calico-apiserver-certs\") pod \"calico-apiserver-87f858bdd-n5nxj\" (UID: \"02b88a36-d2b9-4dbc-acd0-c7e3095fe180\") " pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" Dec 13 01:09:13.594109 kubelet[2604]: I1213 01:09:13.593983 2604 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57sv\" (UniqueName: \"kubernetes.io/projected/02b88a36-d2b9-4dbc-acd0-c7e3095fe180-kube-api-access-k57sv\") pod \"calico-apiserver-87f858bdd-n5nxj\" (UID: \"02b88a36-d2b9-4dbc-acd0-c7e3095fe180\") " pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" Dec 13 01:09:13.676327 kubelet[2604]: I1213 01:09:13.676224 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:09:13.676944 kubelet[2604]: E1213 01:09:13.676901 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:13.811052 kubelet[2604]: E1213 01:09:13.810912 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:13.815203 containerd[1470]: time="2024-12-13T01:09:13.815153327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnh4m,Uid:ab737b6d-349c-469a-b31b-6775293b8eb1,Namespace:kube-system,Attempt:0,}" Dec 13 01:09:13.862173 systemd[1]: Created slice kubepods-besteffort-pod7ddefbe4_94ce_41d5_835d_00042427ce7d.slice - libcontainer container kubepods-besteffort-pod7ddefbe4_94ce_41d5_835d_00042427ce7d.slice. 
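Every sandbox failure that follows stats /var/lib/calico/nodename, the file the calico/node container is expected to mount and populate before the CNI plugin can wire up pod networking; until it exists, each RunPodSandbox attempt for these pods fails the same way. A minimal Go sketch of the readiness check implied by those errors (the path comes straight from the log; the check itself is illustrative):

```go
package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    // Path named in the sandbox errors; the calico/node container mounts
    // /var/lib/calico/ and writes the node name here once it is running.
    const nodenameFile = "/var/lib/calico/nodename"

    data, err := os.ReadFile(nodenameFile)
    if err != nil {
        // Same condition surfaced as "stat /var/lib/calico/nodename:
        // no such file or directory" in the RunPodSandbox failures below.
        fmt.Println("calico not ready:", err)
        return
    }
    fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}
```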
Dec 13 01:09:13.864111 containerd[1470]: time="2024-12-13T01:09:13.864064597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll64m,Uid:7ddefbe4-94ce-41d5-835d-00042427ce7d,Namespace:calico-system,Attempt:0,}" Dec 13 01:09:13.923302 containerd[1470]: time="2024-12-13T01:09:13.923240536Z" level=info msg="shim disconnected" id=3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68 namespace=k8s.io Dec 13 01:09:13.923302 containerd[1470]: time="2024-12-13T01:09:13.923289763Z" level=warning msg="cleaning up after shim disconnected" id=3ac5f8ba10544b304a198d14d21ffde19405329922f53de8be15d030676b1c68 namespace=k8s.io Dec 13 01:09:13.923302 containerd[1470]: time="2024-12-13T01:09:13.923298759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:09:13.952695 kubelet[2604]: E1213 01:09:13.951640 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:13.952695 kubelet[2604]: E1213 01:09:13.952275 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:13.957255 containerd[1470]: time="2024-12-13T01:09:13.957218059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:09:14.014325 containerd[1470]: time="2024-12-13T01:09:14.014276650Z" level=error msg="Failed to destroy network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.014890 containerd[1470]: time="2024-12-13T01:09:14.014868896Z" level=error msg="encountered an error cleaning up failed sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.014994 containerd[1470]: time="2024-12-13T01:09:14.014975373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll64m,Uid:7ddefbe4-94ce-41d5-835d-00042427ce7d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.015371 kubelet[2604]: E1213 01:09:14.015325 2604 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.015424 kubelet[2604]: E1213 01:09:14.015406 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll64m" Dec 13 01:09:14.015466 kubelet[2604]: E1213 01:09:14.015430 2604 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll64m" Dec 13 01:09:14.015561 kubelet[2604]: E1213 01:09:14.015491 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ll64m_calico-system(7ddefbe4-94ce-41d5-835d-00042427ce7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ll64m_calico-system(7ddefbe4-94ce-41d5-835d-00042427ce7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:14.015803 containerd[1470]: time="2024-12-13T01:09:14.015754955Z" level=error msg="Failed to destroy network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.016196 containerd[1470]: time="2024-12-13T01:09:14.016129205Z" level=error msg="encountered an error cleaning up failed sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.016196 containerd[1470]: time="2024-12-13T01:09:14.016166207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnh4m,Uid:ab737b6d-349c-469a-b31b-6775293b8eb1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.016601 kubelet[2604]: E1213 01:09:14.016414 2604 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.016701 kubelet[2604]: E1213 01:09:14.016646 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jnh4m" Dec 13 01:09:14.016785 kubelet[2604]: E1213 01:09:14.016706 2604 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jnh4m" Dec 13 01:09:14.016884 kubelet[2604]: E1213 01:09:14.016824 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jnh4m_kube-system(ab737b6d-349c-469a-b31b-6775293b8eb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jnh4m_kube-system(ab737b6d-349c-469a-b31b-6775293b8eb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jnh4m" podUID="ab737b6d-349c-469a-b31b-6775293b8eb1" Dec 13 01:09:14.108182 kubelet[2604]: E1213 01:09:14.108133 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:14.108717 containerd[1470]: time="2024-12-13T01:09:14.108543454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-n5nxj,Uid:02b88a36-d2b9-4dbc-acd0-c7e3095fe180,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:09:14.109065 containerd[1470]: time="2024-12-13T01:09:14.108890582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d779859c-vbhrm,Uid:948371ee-1334-4913-b824-f4d34d66addf,Namespace:calico-system,Attempt:0,}" Dec 13 01:09:14.109198 containerd[1470]: time="2024-12-13T01:09:14.109017649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-ssrdn,Uid:845ed845-9b07-4cfb-b5d6-9248233c4e24,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:09:14.109376 containerd[1470]: time="2024-12-13T01:09:14.109037207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrntc,Uid:9ca75959-8db7-4b67-a9a1-33128730b6d4,Namespace:kube-system,Attempt:0,}" Dec 13 01:09:14.204541 containerd[1470]: time="2024-12-13T01:09:14.204251220Z" level=error msg="Failed to destroy network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.205936 containerd[1470]: time="2024-12-13T01:09:14.205754734Z" level=error msg="encountered an error cleaning up failed sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.205936 containerd[1470]: time="2024-12-13T01:09:14.205819209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-n5nxj,Uid:02b88a36-d2b9-4dbc-acd0-c7e3095fe180,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.206206 kubelet[2604]: E1213 01:09:14.206126 2604 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.206346 kubelet[2604]: E1213 01:09:14.206207 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" Dec 13 01:09:14.206346 kubelet[2604]: E1213 01:09:14.206229 2604 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" Dec 13 01:09:14.206346 kubelet[2604]: E1213 01:09:14.206286 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87f858bdd-n5nxj_calico-apiserver(02b88a36-d2b9-4dbc-acd0-c7e3095fe180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87f858bdd-n5nxj_calico-apiserver(02b88a36-d2b9-4dbc-acd0-c7e3095fe180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" podUID="02b88a36-d2b9-4dbc-acd0-c7e3095fe180" Dec 13 01:09:14.218752 containerd[1470]: time="2024-12-13T01:09:14.218324504Z" level=error msg="Failed to destroy network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.219541 containerd[1470]: time="2024-12-13T01:09:14.219470551Z" level=error msg="encountered an error cleaning up failed sandbox 
\"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.219632 containerd[1470]: time="2024-12-13T01:09:14.219517072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrntc,Uid:9ca75959-8db7-4b67-a9a1-33128730b6d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.219826 containerd[1470]: time="2024-12-13T01:09:14.219767360Z" level=error msg="Failed to destroy network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.220113 kubelet[2604]: E1213 01:09:14.220015 2604 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.220559 kubelet[2604]: E1213 01:09:14.220204 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hrntc" Dec 13 01:09:14.220559 kubelet[2604]: E1213 01:09:14.220228 2604 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hrntc" Dec 13 01:09:14.220559 kubelet[2604]: E1213 01:09:14.220278 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hrntc_kube-system(9ca75959-8db7-4b67-a9a1-33128730b6d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hrntc_kube-system(9ca75959-8db7-4b67-a9a1-33128730b6d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hrntc" podUID="9ca75959-8db7-4b67-a9a1-33128730b6d4" Dec 13 01:09:14.220727 containerd[1470]: time="2024-12-13T01:09:14.220466053Z" 
level=error msg="encountered an error cleaning up failed sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.220727 containerd[1470]: time="2024-12-13T01:09:14.220539647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d779859c-vbhrm,Uid:948371ee-1334-4913-b824-f4d34d66addf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.220790 kubelet[2604]: E1213 01:09:14.220745 2604 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.220790 kubelet[2604]: E1213 01:09:14.220776 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" Dec 13 01:09:14.220844 kubelet[2604]: E1213 01:09:14.220796 2604 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" Dec 13 01:09:14.220873 kubelet[2604]: E1213 01:09:14.220828 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79d779859c-vbhrm_calico-system(948371ee-1334-4913-b824-f4d34d66addf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79d779859c-vbhrm_calico-system(948371ee-1334-4913-b824-f4d34d66addf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" podUID="948371ee-1334-4913-b824-f4d34d66addf" Dec 13 01:09:14.225905 containerd[1470]: time="2024-12-13T01:09:14.225860411Z" level=error msg="Failed to destroy network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.226314 containerd[1470]: time="2024-12-13T01:09:14.226281132Z" level=error msg="encountered an error cleaning up failed sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.226373 containerd[1470]: time="2024-12-13T01:09:14.226343944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-ssrdn,Uid:845ed845-9b07-4cfb-b5d6-9248233c4e24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.226592 kubelet[2604]: E1213 01:09:14.226545 2604 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:14.226629 kubelet[2604]: E1213 01:09:14.226611 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" Dec 13 01:09:14.226660 kubelet[2604]: E1213 01:09:14.226634 2604 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" Dec 13 01:09:14.226707 kubelet[2604]: E1213 01:09:14.226681 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87f858bdd-ssrdn_calico-apiserver(845ed845-9b07-4cfb-b5d6-9248233c4e24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87f858bdd-ssrdn_calico-apiserver(845ed845-9b07-4cfb-b5d6-9248233c4e24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" podUID="845ed845-9b07-4cfb-b5d6-9248233c4e24" Dec 13 01:09:14.953357 kubelet[2604]: I1213 01:09:14.953321 2604 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:14.954200 kubelet[2604]: I1213 01:09:14.954170 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:14.955336 kubelet[2604]: I1213 01:09:14.955306 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:14.955854 containerd[1470]: time="2024-12-13T01:09:14.955808731Z" level=info msg="StopPodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\"" Dec 13 01:09:14.956136 containerd[1470]: time="2024-12-13T01:09:14.955965567Z" level=info msg="Ensure that sandbox 9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5 in task-service has been cleanup successfully" Dec 13 01:09:14.956964 containerd[1470]: time="2024-12-13T01:09:14.956891234Z" level=info msg="StopPodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\"" Dec 13 01:09:14.957100 containerd[1470]: time="2024-12-13T01:09:14.957009375Z" level=info msg="Ensure that sandbox 39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336 in task-service has been cleanup successfully" Dec 13 01:09:14.957851 kubelet[2604]: I1213 01:09:14.957585 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:14.958114 containerd[1470]: time="2024-12-13T01:09:14.958090434Z" level=info msg="StopPodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\"" Dec 13 01:09:14.958235 containerd[1470]: time="2024-12-13T01:09:14.958216370Z" level=info msg="Ensure that sandbox 7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb in task-service has been cleanup successfully" Dec 13 01:09:14.958262 containerd[1470]: time="2024-12-13T01:09:14.958232351Z" level=info msg="StopPodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\"" Dec 13 01:09:14.958498 containerd[1470]: time="2024-12-13T01:09:14.958392563Z" level=info msg="Ensure that sandbox 499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d in task-service has been cleanup successfully" Dec 13 01:09:14.960082 kubelet[2604]: I1213 01:09:14.959984 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:14.961461 containerd[1470]: time="2024-12-13T01:09:14.961002297Z" level=info msg="StopPodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\"" Dec 13 01:09:14.961461 containerd[1470]: time="2024-12-13T01:09:14.961192578Z" level=info msg="Ensure that sandbox 9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c in task-service has been cleanup successfully" Dec 13 01:09:14.963206 kubelet[2604]: I1213 01:09:14.963183 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:14.964786 containerd[1470]: time="2024-12-13T01:09:14.964741073Z" level=info msg="StopPodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\"" Dec 13 01:09:14.965562 containerd[1470]: time="2024-12-13T01:09:14.964975070Z" level=info msg="Ensure that sandbox 
ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff in task-service has been cleanup successfully" Dec 13 01:09:15.022003 containerd[1470]: time="2024-12-13T01:09:15.021668537Z" level=error msg="StopPodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" failed" error="failed to destroy network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:15.022979 containerd[1470]: time="2024-12-13T01:09:15.022914878Z" level=error msg="StopPodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" failed" error="failed to destroy network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:15.023402 kubelet[2604]: E1213 01:09:15.023127 2604 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:15.023402 kubelet[2604]: E1213 01:09:15.023202 2604 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c"} Dec 13 01:09:15.023402 kubelet[2604]: E1213 01:09:15.023263 2604 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"845ed845-9b07-4cfb-b5d6-9248233c4e24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:09:15.023402 kubelet[2604]: E1213 01:09:15.023285 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"845ed845-9b07-4cfb-b5d6-9248233c4e24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" podUID="845ed845-9b07-4cfb-b5d6-9248233c4e24" Dec 13 01:09:15.023624 containerd[1470]: time="2024-12-13T01:09:15.023351800Z" level=error msg="StopPodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" failed" error="failed to destroy network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 01:09:15.023653 kubelet[2604]: E1213 01:09:15.023316 2604 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:15.023653 kubelet[2604]: E1213 01:09:15.023330 2604 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb"} Dec 13 01:09:15.023653 kubelet[2604]: E1213 01:09:15.023356 2604 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02b88a36-d2b9-4dbc-acd0-c7e3095fe180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:09:15.023653 kubelet[2604]: E1213 01:09:15.023376 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02b88a36-d2b9-4dbc-acd0-c7e3095fe180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" podUID="02b88a36-d2b9-4dbc-acd0-c7e3095fe180" Dec 13 01:09:15.024004 kubelet[2604]: E1213 01:09:15.023865 2604 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:15.024004 kubelet[2604]: E1213 01:09:15.023935 2604 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5"} Dec 13 01:09:15.024004 kubelet[2604]: E1213 01:09:15.023955 2604 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ddefbe4-94ce-41d5-835d-00042427ce7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:09:15.024004 kubelet[2604]: E1213 01:09:15.023972 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ddefbe4-94ce-41d5-835d-00042427ce7d\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll64m" podUID="7ddefbe4-94ce-41d5-835d-00042427ce7d" Dec 13 01:09:15.024995 containerd[1470]: time="2024-12-13T01:09:15.024951269Z" level=error msg="StopPodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" failed" error="failed to destroy network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:15.025235 kubelet[2604]: E1213 01:09:15.025183 2604 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:15.025336 kubelet[2604]: E1213 01:09:15.025304 2604 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336"} Dec 13 01:09:15.025336 kubelet[2604]: E1213 01:09:15.025332 2604 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"948371ee-1334-4913-b824-f4d34d66addf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:09:15.025501 kubelet[2604]: E1213 01:09:15.025350 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"948371ee-1334-4913-b824-f4d34d66addf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" podUID="948371ee-1334-4913-b824-f4d34d66addf" Dec 13 01:09:15.025657 containerd[1470]: time="2024-12-13T01:09:15.025628519Z" level=error msg="StopPodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" failed" error="failed to destroy network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:15.025766 kubelet[2604]: E1213 01:09:15.025733 2604 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:15.025799 kubelet[2604]: E1213 01:09:15.025769 2604 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff"} Dec 13 01:09:15.025799 kubelet[2604]: E1213 01:09:15.025787 2604 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab737b6d-349c-469a-b31b-6775293b8eb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:09:15.025876 kubelet[2604]: E1213 01:09:15.025803 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab737b6d-349c-469a-b31b-6775293b8eb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jnh4m" podUID="ab737b6d-349c-469a-b31b-6775293b8eb1" Dec 13 01:09:15.026270 containerd[1470]: time="2024-12-13T01:09:15.026230744Z" level=error msg="StopPodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" failed" error="failed to destroy network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:09:15.026429 kubelet[2604]: E1213 01:09:15.026384 2604 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:15.026478 kubelet[2604]: E1213 01:09:15.026450 2604 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d"} Dec 13 01:09:15.026507 kubelet[2604]: E1213 01:09:15.026494 2604 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ca75959-8db7-4b67-a9a1-33128730b6d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:09:15.026550 kubelet[2604]: E1213 01:09:15.026519 2604 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ca75959-8db7-4b67-a9a1-33128730b6d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hrntc" podUID="9ca75959-8db7-4b67-a9a1-33128730b6d4" Dec 13 01:09:16.228355 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998). Dec 13 01:09:16.279460 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:16.281157 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:16.285513 systemd-logind[1448]: New session 11 of user core. Dec 13 01:09:16.295587 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:09:16.559745 sshd[3735]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:16.563929 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:40998.service: Deactivated successfully. Dec 13 01:09:16.566039 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:09:16.566719 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:09:16.567639 systemd-logind[1448]: Removed session 11. Dec 13 01:09:18.662387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233076783.mount: Deactivated successfully. 
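[Editor's note] Every StopPodSandbox attempt above fails for the same reason: the Calico CNI delete path stats /var/lib/calico/nodename before tearing down the pod network, and that file only appears once the calico/node container is running with /var/lib/calico/ mounted into it. A minimal Go sketch of that pre-flight check, illustrating the failure mode seen in the log (not the actual Calico plugin source):

```go
// check_nodename.go - illustrative only: mimics the pre-flight check whose
// failure produces the "stat /var/lib/calico/nodename" errors in the log.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the error string reported by the plugin in the entries above.
		fmt.Printf("plugin type=%q failed (delete): %v: check that the calico/node "+
			"container is running and has mounted /var/lib/calico/\n", "calico", err)
		return
	}
	fmt.Println("nodename file present; network teardown can proceed")
}
```

Once calico-node starts (the image pull and StartContainer a few entries below), the file exists and the later StopPodSandbox calls in this log complete successfully.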
Dec 13 01:09:19.496374 containerd[1470]: time="2024-12-13T01:09:19.496303611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:19.500083 containerd[1470]: time="2024-12-13T01:09:19.500025730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:09:19.501269 containerd[1470]: time="2024-12-13T01:09:19.501231545Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:19.513804 containerd[1470]: time="2024-12-13T01:09:19.513746084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:19.514287 containerd[1470]: time="2024-12-13T01:09:19.514231307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.556966668s" Dec 13 01:09:19.514319 containerd[1470]: time="2024-12-13T01:09:19.514286915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:09:19.521969 containerd[1470]: time="2024-12-13T01:09:19.521922986Z" level=info msg="CreateContainer within sandbox \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:09:19.542584 containerd[1470]: time="2024-12-13T01:09:19.542542698Z" level=info msg="CreateContainer within sandbox \"41fc69009de5bf04db8d5fa810e26fa1a6a488aabcbe327a136b22e6fbeeee7d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"435df01998c8061cce1dfccd3a76b7b5cac3025c8a1ae0364bd620a329682928\"" Dec 13 01:09:19.543121 containerd[1470]: time="2024-12-13T01:09:19.542991140Z" level=info msg="StartContainer for \"435df01998c8061cce1dfccd3a76b7b5cac3025c8a1ae0364bd620a329682928\"" Dec 13 01:09:19.611657 systemd[1]: Started cri-containerd-435df01998c8061cce1dfccd3a76b7b5cac3025c8a1ae0364bd620a329682928.scope - libcontainer container 435df01998c8061cce1dfccd3a76b7b5cac3025c8a1ae0364bd620a329682928. Dec 13 01:09:19.650551 containerd[1470]: time="2024-12-13T01:09:19.649811738Z" level=info msg="StartContainer for \"435df01998c8061cce1dfccd3a76b7b5cac3025c8a1ae0364bd620a329682928\" returns successfully" Dec 13 01:09:19.716465 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:09:19.716607 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:09:19.976019 kubelet[2604]: E1213 01:09:19.975977 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:19.990126 kubelet[2604]: I1213 01:09:19.989093 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qcqsp" podStartSLOduration=1.744891275 podStartE2EDuration="20.989078224s" podCreationTimestamp="2024-12-13 01:08:59 +0000 UTC" firstStartedPulling="2024-12-13 01:09:00.270930401 +0000 UTC m=+22.606607337" lastFinishedPulling="2024-12-13 01:09:19.51511735 +0000 UTC m=+41.850794286" observedRunningTime="2024-12-13 01:09:19.988854729 +0000 UTC m=+42.324531665" watchObservedRunningTime="2024-12-13 01:09:19.989078224 +0000 UTC m=+42.324755160" Dec 13 01:09:20.977893 kubelet[2604]: E1213 01:09:20.977848 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:21.175467 kernel: bpftool[3993]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:09:21.421433 systemd-networkd[1401]: vxlan.calico: Link UP Dec 13 01:09:21.421528 systemd-networkd[1401]: vxlan.calico: Gained carrier Dec 13 01:09:21.585763 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:33754.service - OpenSSH per-connection server daemon (10.0.0.1:33754). Dec 13 01:09:21.620976 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 33754 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:21.621720 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:21.625608 systemd-logind[1448]: New session 12 of user core. Dec 13 01:09:21.634711 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:09:21.796263 sshd[4035]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:21.809351 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:33754.service: Deactivated successfully. Dec 13 01:09:21.811149 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:09:21.812904 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:09:21.821688 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Dec 13 01:09:21.822671 systemd-logind[1448]: Removed session 12. Dec 13 01:09:21.851937 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:21.853469 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:21.857076 systemd-logind[1448]: New session 13 of user core. Dec 13 01:09:21.865554 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:09:22.088355 sshd[4080]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:22.098182 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:33768.service: Deactivated successfully. Dec 13 01:09:22.099908 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:09:22.101658 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:09:22.110671 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:33774.service - OpenSSH per-connection server daemon (10.0.0.1:33774). Dec 13 01:09:22.111674 systemd-logind[1448]: Removed session 13. 
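[Editor's note] The startup-latency numbers in the pod_startup_latency_tracker entry above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (≈20.989 s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, ≈19.244 s), leaving ≈1.745 s. A short Go sketch reproducing that arithmetic from the log's own timestamps (the time layout is an assumption; this is not kubelet's tracker code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-12-13 01:08:59 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2024-12-13 01:09:00.270930401 +0000 UTC") // firstStartedPulling
	lastPull := parse("2024-12-13 01:09:19.51511735 +0000 UTC")   // lastFinishedPulling
	running := parse("2024-12-13 01:09:19.989078224 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // ≈ 20.989078224s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded ≈ 1.744891275s
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```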
Dec 13 01:09:22.143367 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:22.145109 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:22.149709 systemd-logind[1448]: New session 14 of user core. Dec 13 01:09:22.162627 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:09:22.280630 sshd[4095]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:22.286356 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:33774.service: Deactivated successfully. Dec 13 01:09:22.288459 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:09:22.290703 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:09:22.291933 systemd-logind[1448]: Removed session 14. Dec 13 01:09:23.274621 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Dec 13 01:09:25.857770 containerd[1470]: time="2024-12-13T01:09:25.857727706Z" level=info msg="StopPodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\"" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.904 [INFO][4130] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.904 [INFO][4130] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" iface="eth0" netns="/var/run/netns/cni-4832217d-9672-c5cf-3aed-f267d92e191f" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.904 [INFO][4130] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" iface="eth0" netns="/var/run/netns/cni-4832217d-9672-c5cf-3aed-f267d92e191f" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.905 [INFO][4130] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" iface="eth0" netns="/var/run/netns/cni-4832217d-9672-c5cf-3aed-f267d92e191f" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.905 [INFO][4130] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.905 [INFO][4130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.963 [INFO][4137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.963 [INFO][4137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.964 [INFO][4137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.971 [WARNING][4137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.971 [INFO][4137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.972 [INFO][4137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:25.977727 containerd[1470]: 2024-12-13 01:09:25.975 [INFO][4130] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:25.980361 systemd[1]: run-netns-cni\x2d4832217d\x2d9672\x2dc5cf\x2d3aed\x2df267d92e191f.mount: Deactivated successfully. Dec 13 01:09:25.980714 containerd[1470]: time="2024-12-13T01:09:25.980561554Z" level=info msg="TearDown network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" successfully" Dec 13 01:09:25.980714 containerd[1470]: time="2024-12-13T01:09:25.980589137Z" level=info msg="StopPodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" returns successfully" Dec 13 01:09:25.981122 kubelet[2604]: E1213 01:09:25.981083 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:25.981716 containerd[1470]: time="2024-12-13T01:09:25.981676095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnh4m,Uid:ab737b6d-349c-469a-b31b-6775293b8eb1,Namespace:kube-system,Attempt:1,}" Dec 13 01:09:26.502752 systemd-networkd[1401]: calibef87ffe9dc: Link UP Dec 13 01:09:26.503391 systemd-networkd[1401]: calibef87ffe9dc: Gained carrier Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.414 [INFO][4145] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0 coredns-7db6d8ff4d- kube-system ab737b6d-349c-469a-b31b-6775293b8eb1 882 0 2024-12-13 01:08:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-jnh4m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibef87ffe9dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.414 [INFO][4145] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.443 [INFO][4158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" HandleID="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.450 [INFO][4158] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" HandleID="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c9f00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-jnh4m", "timestamp":"2024-12-13 01:09:26.443597578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.451 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.451 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.451 [INFO][4158] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.452 [INFO][4158] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.457 [INFO][4158] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.460 [INFO][4158] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.462 [INFO][4158] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.463 [INFO][4158] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.463 [INFO][4158] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.465 [INFO][4158] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.477 [INFO][4158] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.493 [INFO][4158] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.493 [INFO][4158] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" host="localhost" Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.494 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:26.517262 containerd[1470]: 2024-12-13 01:09:26.494 [INFO][4158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" HandleID="k8s-pod-network.4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.518175 containerd[1470]: 2024-12-13 01:09:26.498 [INFO][4145] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab737b6d-349c-469a-b31b-6775293b8eb1", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-jnh4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef87ffe9dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:26.518175 containerd[1470]: 2024-12-13 01:09:26.498 [INFO][4145] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.518175 containerd[1470]: 2024-12-13 01:09:26.498 [INFO][4145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibef87ffe9dc ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.518175 containerd[1470]: 2024-12-13 01:09:26.503 [INFO][4145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.518175 containerd[1470]: 2024-12-13 01:09:26.503 [INFO][4145] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab737b6d-349c-469a-b31b-6775293b8eb1", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c", Pod:"coredns-7db6d8ff4d-jnh4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef87ffe9dc", MAC:"fa:ae:eb:01:50:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:26.518175 containerd[1470]: 2024-12-13 01:09:26.512 [INFO][4145] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jnh4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:26.550189 containerd[1470]: time="2024-12-13T01:09:26.550018107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:26.550189 containerd[1470]: time="2024-12-13T01:09:26.550084446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:26.550189 containerd[1470]: time="2024-12-13T01:09:26.550114735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:26.550496 containerd[1470]: time="2024-12-13T01:09:26.550243524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:26.573582 systemd[1]: Started cri-containerd-4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c.scope - libcontainer container 4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c. Dec 13 01:09:26.585559 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:09:26.614951 containerd[1470]: time="2024-12-13T01:09:26.614903384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnh4m,Uid:ab737b6d-349c-469a-b31b-6775293b8eb1,Namespace:kube-system,Attempt:1,} returns sandbox id \"4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c\"" Dec 13 01:09:26.615845 kubelet[2604]: E1213 01:09:26.615808 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:26.618422 containerd[1470]: time="2024-12-13T01:09:26.618377519Z" level=info msg="CreateContainer within sandbox \"4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:09:26.676971 containerd[1470]: time="2024-12-13T01:09:26.676915020Z" level=info msg="CreateContainer within sandbox \"4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"250f81414026af99b6c7dba8b56a1070cc14ec42d3f7e6174856eaadbd90bcde\"" Dec 13 01:09:26.678354 containerd[1470]: time="2024-12-13T01:09:26.678305426Z" level=info msg="StartContainer for \"250f81414026af99b6c7dba8b56a1070cc14ec42d3f7e6174856eaadbd90bcde\"" Dec 13 01:09:26.705703 systemd[1]: Started cri-containerd-250f81414026af99b6c7dba8b56a1070cc14ec42d3f7e6174856eaadbd90bcde.scope - libcontainer container 250f81414026af99b6c7dba8b56a1070cc14ec42d3f7e6174856eaadbd90bcde. Dec 13 01:09:26.736216 containerd[1470]: time="2024-12-13T01:09:26.736174311Z" level=info msg="StartContainer for \"250f81414026af99b6c7dba8b56a1070cc14ec42d3f7e6174856eaadbd90bcde\" returns successfully" Dec 13 01:09:27.000136 kubelet[2604]: E1213 01:09:27.000097 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:27.009066 kubelet[2604]: I1213 01:09:27.008797 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jnh4m" podStartSLOduration=34.008780697 podStartE2EDuration="34.008780697s" podCreationTimestamp="2024-12-13 01:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:09:27.008459474 +0000 UTC m=+49.344136440" watchObservedRunningTime="2024-12-13 01:09:27.008780697 +0000 UTC m=+49.344457643" Dec 13 01:09:27.294479 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:33788.service - OpenSSH per-connection server daemon (10.0.0.1:33788). Dec 13 01:09:27.332794 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 33788 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:27.334519 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:27.339079 systemd-logind[1448]: New session 15 of user core. Dec 13 01:09:27.348589 systemd[1]: Started session-15.scope - Session 15 of User core. 
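[Editor's note] The CNI ADD traced above follows Calico's IPAM sequence: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, claim one address from it (192.168.88.129/32 for coredns-7db6d8ff4d-jnh4m), write the block, and release the lock. A rough sketch of the block geometry involved, using Go's net/netip (illustration only; Calico's allocator records claims in its datastore rather than enumerating addresses like this):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host "localhost" holds an affinity for this /26 IPAM block; the first two
	// workloads in the log were assigned .129 and .130 from it.
	block := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Printf("block %s: %d addresses starting at %s\n",
		block, 1<<(32-block.Bits()), block.Addr())

	addr := block.Addr()
	for i := 0; i < 4 && block.Contains(addr); i++ {
		fmt.Println("candidate:", addr) // .128, .129, .130, .131 ...
		addr = addr.Next()
	}
}
```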
Dec 13 01:09:27.459724 sshd[4275]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:27.463203 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:33788.service: Deactivated successfully. Dec 13 01:09:27.465192 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:09:27.465858 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:09:27.466817 systemd-logind[1448]: Removed session 15. Dec 13 01:09:27.690591 systemd-networkd[1401]: calibef87ffe9dc: Gained IPv6LL Dec 13 01:09:28.001351 kubelet[2604]: E1213 01:09:28.001273 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:28.857662 containerd[1470]: time="2024-12-13T01:09:28.857578822Z" level=info msg="StopPodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\"" Dec 13 01:09:28.857662 containerd[1470]: time="2024-12-13T01:09:28.857655911Z" level=info msg="StopPodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\"" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.956 [INFO][4327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" iface="eth0" netns="/var/run/netns/cni-4f2cecfb-30c3-a168-fda1-c0f73b268d7f" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" iface="eth0" netns="/var/run/netns/cni-4f2cecfb-30c3-a168-fda1-c0f73b268d7f" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" iface="eth0" netns="/var/run/netns/cni-4f2cecfb-30c3-a168-fda1-c0f73b268d7f" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.981 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.981 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.981 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.985 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.985 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.986 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:28.990798 containerd[1470]: 2024-12-13 01:09:28.988 [INFO][4327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:28.993558 containerd[1470]: time="2024-12-13T01:09:28.993507118Z" level=info msg="TearDown network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" successfully" Dec 13 01:09:28.993558 containerd[1470]: time="2024-12-13T01:09:28.993538378Z" level=info msg="StopPodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" returns successfully" Dec 13 01:09:28.994271 systemd[1]: run-netns-cni\x2d4f2cecfb\x2d30c3\x2da168\x2dfda1\x2dc0f73b268d7f.mount: Deactivated successfully. Dec 13 01:09:28.994593 containerd[1470]: time="2024-12-13T01:09:28.994266809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-n5nxj,Uid:02b88a36-d2b9-4dbc-acd0-c7e3095fe180,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" iface="eth0" netns="/var/run/netns/cni-23502f6f-2674-a1a1-dddf-96bf78f29724" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" iface="eth0" netns="/var/run/netns/cni-23502f6f-2674-a1a1-dddf-96bf78f29724" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" iface="eth0" netns="/var/run/netns/cni-23502f6f-2674-a1a1-dddf-96bf78f29724" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.957 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.982 [INFO][4341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.982 [INFO][4341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.986 [INFO][4341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.991 [WARNING][4341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.991 [INFO][4341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.992 [INFO][4341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:28.997534 containerd[1470]: 2024-12-13 01:09:28.995 [INFO][4326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:28.997904 containerd[1470]: time="2024-12-13T01:09:28.997809301Z" level=info msg="TearDown network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" successfully" Dec 13 01:09:28.997904 containerd[1470]: time="2024-12-13T01:09:28.997832596Z" level=info msg="StopPodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" returns successfully" Dec 13 01:09:28.998278 containerd[1470]: time="2024-12-13T01:09:28.998257148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-ssrdn,Uid:845ed845-9b07-4cfb-b5d6-9248233c4e24,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:09:29.000909 systemd[1]: run-netns-cni\x2d23502f6f\x2d2674\x2da1a1\x2ddddf\x2d96bf78f29724.mount: Deactivated successfully. 
Dec 13 01:09:29.002868 kubelet[2604]: E1213 01:09:29.002846 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:29.113287 systemd-networkd[1401]: caliefaa081908b: Link UP Dec 13 01:09:29.115315 systemd-networkd[1401]: caliefaa081908b: Gained carrier Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.052 [INFO][4367] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0 calico-apiserver-87f858bdd- calico-apiserver 845ed845-9b07-4cfb-b5d6-9248233c4e24 921 0 2024-12-13 01:08:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87f858bdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-87f858bdd-ssrdn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliefaa081908b [] []}} ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.052 [INFO][4367] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.079 [INFO][4385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" HandleID="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.087 [INFO][4385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" HandleID="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcd70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-87f858bdd-ssrdn", "timestamp":"2024-12-13 01:09:29.079344864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.087 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.087 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.087 [INFO][4385] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.088 [INFO][4385] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.091 [INFO][4385] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.095 [INFO][4385] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.096 [INFO][4385] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.098 [INFO][4385] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.098 [INFO][4385] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.099 [INFO][4385] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.102 [INFO][4385] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.106 [INFO][4385] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.106 [INFO][4385] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" host="localhost" Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.106 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:09:29.129429 containerd[1470]: 2024-12-13 01:09:29.106 [INFO][4385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" HandleID="k8s-pod-network.8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.129995 containerd[1470]: 2024-12-13 01:09:29.109 [INFO][4367] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"845ed845-9b07-4cfb-b5d6-9248233c4e24", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-87f858bdd-ssrdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefaa081908b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:29.129995 containerd[1470]: 2024-12-13 01:09:29.109 [INFO][4367] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.129995 containerd[1470]: 2024-12-13 01:09:29.109 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefaa081908b ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.129995 containerd[1470]: 2024-12-13 01:09:29.113 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.129995 containerd[1470]: 2024-12-13 01:09:29.115 [INFO][4367] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" 
Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"845ed845-9b07-4cfb-b5d6-9248233c4e24", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e", Pod:"calico-apiserver-87f858bdd-ssrdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefaa081908b", MAC:"b6:2b:ea:a1:91:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:29.129995 containerd[1470]: 2024-12-13 01:09:29.126 [INFO][4367] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-ssrdn" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:29.150175 systemd-networkd[1401]: cali5d98322a22f: Link UP Dec 13 01:09:29.151009 systemd-networkd[1401]: cali5d98322a22f: Gained carrier Dec 13 01:09:29.153520 containerd[1470]: time="2024-12-13T01:09:29.152705766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:29.153520 containerd[1470]: time="2024-12-13T01:09:29.153497248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:29.153709 containerd[1470]: time="2024-12-13T01:09:29.153659071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:29.153902 containerd[1470]: time="2024-12-13T01:09:29.153847687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.052 [INFO][4354] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0 calico-apiserver-87f858bdd- calico-apiserver 02b88a36-d2b9-4dbc-acd0-c7e3095fe180 922 0 2024-12-13 01:08:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87f858bdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-87f858bdd-n5nxj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d98322a22f [] []}} ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.052 [INFO][4354] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.079 [INFO][4383] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" HandleID="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.089 [INFO][4383] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" HandleID="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029ccc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-87f858bdd-n5nxj", "timestamp":"2024-12-13 01:09:29.079803562 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.089 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.106 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.106 [INFO][4383] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.109 [INFO][4383] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.112 [INFO][4383] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.119 [INFO][4383] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.123 [INFO][4383] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.127 [INFO][4383] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.127 [INFO][4383] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.129 [INFO][4383] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009 Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.135 [INFO][4383] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.140 [INFO][4383] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.141 [INFO][4383] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" host="localhost" Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.141 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
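Note how the two concurrent CNI ADDs ([4385] for -ssrdn, [4383] for -n5nxj) are serialized by the host-wide IPAM lock: the second request only proceeds once the first logs "Released host-wide IPAM lock", and the pods end up with consecutive addresses (.130 and .131). A minimal sketch of that serialization with a mutex; the pod names and addresses are taken from the log, everything else is illustrative.

// Two concurrent assignments serialized by one host-wide lock. Which pod
// wins the lock first (and therefore gets .130) depends on scheduling.
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex
	next int // next last octet to hand out within 192.168.88.128/26
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer h.mu.Unlock()
	ip := fmt.Sprintf("192.168.88.%d/26", h.next)
	h.next++
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip // lock released on return: "Released host-wide IPAM lock."
}

func main() {
	ipam := &hostIPAM{next: 130}
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-87f858bdd-ssrdn", "calico-apiserver-87f858bdd-n5nxj"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			ipam.assign(p)
		}(pod)
	}
	wg.Wait()
}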
Dec 13 01:09:29.167856 containerd[1470]: 2024-12-13 01:09:29.141 [INFO][4383] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" HandleID="k8s-pod-network.7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.168568 containerd[1470]: 2024-12-13 01:09:29.144 [INFO][4354] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02b88a36-d2b9-4dbc-acd0-c7e3095fe180", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-87f858bdd-n5nxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d98322a22f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:29.168568 containerd[1470]: 2024-12-13 01:09:29.144 [INFO][4354] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.168568 containerd[1470]: 2024-12-13 01:09:29.144 [INFO][4354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d98322a22f ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.168568 containerd[1470]: 2024-12-13 01:09:29.152 [INFO][4354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.168568 containerd[1470]: 2024-12-13 01:09:29.153 [INFO][4354] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" 
Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02b88a36-d2b9-4dbc-acd0-c7e3095fe180", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009", Pod:"calico-apiserver-87f858bdd-n5nxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d98322a22f", MAC:"7e:62:a0:ad:76:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:29.168568 containerd[1470]: 2024-12-13 01:09:29.161 [INFO][4354] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009" Namespace="calico-apiserver" Pod="calico-apiserver-87f858bdd-n5nxj" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:29.173618 systemd[1]: Started cri-containerd-8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e.scope - libcontainer container 8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e. Dec 13 01:09:29.188476 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:09:29.191835 containerd[1470]: time="2024-12-13T01:09:29.191658446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:29.191961 containerd[1470]: time="2024-12-13T01:09:29.191878293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:29.191961 containerd[1470]: time="2024-12-13T01:09:29.191926636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:29.192063 containerd[1470]: time="2024-12-13T01:09:29.192032411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:29.211580 systemd[1]: Started cri-containerd-7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009.scope - libcontainer container 7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009. 
Dec 13 01:09:29.216143 containerd[1470]: time="2024-12-13T01:09:29.216108293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-ssrdn,Uid:845ed845-9b07-4cfb-b5d6-9248233c4e24,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e\"" Dec 13 01:09:29.217916 containerd[1470]: time="2024-12-13T01:09:29.217838684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:09:29.224322 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:09:29.248025 containerd[1470]: time="2024-12-13T01:09:29.247978459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f858bdd-n5nxj,Uid:02b88a36-d2b9-4dbc-acd0-c7e3095fe180,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009\"" Dec 13 01:09:29.857688 containerd[1470]: time="2024-12-13T01:09:29.857610135Z" level=info msg="StopPodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\"" Dec 13 01:09:29.858159 containerd[1470]: time="2024-12-13T01:09:29.857617449Z" level=info msg="StopPodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\"" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.903 [INFO][4537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.903 [INFO][4537] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" iface="eth0" netns="/var/run/netns/cni-acf872b3-e376-9c23-6318-3061fde6ae87" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.903 [INFO][4537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" iface="eth0" netns="/var/run/netns/cni-acf872b3-e376-9c23-6318-3061fde6ae87" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.903 [INFO][4537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" iface="eth0" netns="/var/run/netns/cni-acf872b3-e376-9c23-6318-3061fde6ae87" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.903 [INFO][4537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.903 [INFO][4537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.924 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.925 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.925 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.930 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.930 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.931 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:29.936738 containerd[1470]: 2024-12-13 01:09:29.934 [INFO][4537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:29.936738 containerd[1470]: time="2024-12-13T01:09:29.936684741Z" level=info msg="TearDown network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" successfully" Dec 13 01:09:29.936738 containerd[1470]: time="2024-12-13T01:09:29.936710751Z" level=info msg="StopPodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" returns successfully" Dec 13 01:09:29.937387 containerd[1470]: time="2024-12-13T01:09:29.937345100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll64m,Uid:7ddefbe4-94ce-41d5-835d-00042427ce7d,Namespace:calico-system,Attempt:1,}" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.900 [INFO][4538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.900 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" iface="eth0" netns="/var/run/netns/cni-4a835797-a5ef-e804-2143-4b284a200438" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.901 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" iface="eth0" netns="/var/run/netns/cni-4a835797-a5ef-e804-2143-4b284a200438" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.901 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" iface="eth0" netns="/var/run/netns/cni-4a835797-a5ef-e804-2143-4b284a200438" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.901 [INFO][4538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.901 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.926 [INFO][4553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.926 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.931 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.936 [WARNING][4553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.936 [INFO][4553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.937 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:29.943638 containerd[1470]: 2024-12-13 01:09:29.940 [INFO][4538] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:29.943638 containerd[1470]: time="2024-12-13T01:09:29.943424933Z" level=info msg="TearDown network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" successfully" Dec 13 01:09:29.943638 containerd[1470]: time="2024-12-13T01:09:29.943473938Z" level=info msg="StopPodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" returns successfully" Dec 13 01:09:29.944154 containerd[1470]: time="2024-12-13T01:09:29.944117324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d779859c-vbhrm,Uid:948371ee-1334-4913-b824-f4d34d66addf,Namespace:calico-system,Attempt:1,}" Dec 13 01:09:29.999808 systemd[1]: run-netns-cni\x2d4a835797\x2da5ef\x2de804\x2d2143\x2d4b284a200438.mount: Deactivated successfully. Dec 13 01:09:29.999930 systemd[1]: run-netns-cni\x2dacf872b3\x2de376\x2d9c23\x2d6318\x2d3061fde6ae87.mount: Deactivated successfully. 
Dec 13 01:09:30.082321 systemd-networkd[1401]: calie0f779e2371: Link UP Dec 13 01:09:30.082592 systemd-networkd[1401]: calie0f779e2371: Gained carrier Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:29.992 [INFO][4568] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ll64m-eth0 csi-node-driver- calico-system 7ddefbe4-94ce-41d5-835d-00042427ce7d 940 0 2024-12-13 01:08:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ll64m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie0f779e2371 [] []}} ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:29.993 [INFO][4568] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.039 [INFO][4597] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" HandleID="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.050 [INFO][4597] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" HandleID="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ll64m", "timestamp":"2024-12-13 01:09:30.039626904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.050 [INFO][4597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.050 [INFO][4597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.051 [INFO][4597] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.052 [INFO][4597] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.058 [INFO][4597] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.061 [INFO][4597] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.063 [INFO][4597] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.064 [INFO][4597] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.065 [INFO][4597] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.066 [INFO][4597] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3 Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.069 [INFO][4597] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.074 [INFO][4597] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.074 [INFO][4597] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" host="localhost" Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.074 [INFO][4597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:09:30.095872 containerd[1470]: 2024-12-13 01:09:30.074 [INFO][4597] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" HandleID="k8s-pod-network.d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.096957 containerd[1470]: 2024-12-13 01:09:30.077 [INFO][4568] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll64m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ddefbe4-94ce-41d5-835d-00042427ce7d", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ll64m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0f779e2371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:30.096957 containerd[1470]: 2024-12-13 01:09:30.078 [INFO][4568] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.096957 containerd[1470]: 2024-12-13 01:09:30.078 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0f779e2371 ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.096957 containerd[1470]: 2024-12-13 01:09:30.080 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.096957 containerd[1470]: 2024-12-13 01:09:30.081 [INFO][4568] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll64m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ddefbe4-94ce-41d5-835d-00042427ce7d", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3", Pod:"csi-node-driver-ll64m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0f779e2371", MAC:"8e:c0:d5:3d:2e:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:30.096957 containerd[1470]: 2024-12-13 01:09:30.087 [INFO][4568] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3" Namespace="calico-system" Pod="csi-node-driver-ll64m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:30.123845 containerd[1470]: time="2024-12-13T01:09:30.123577994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:30.124110 containerd[1470]: time="2024-12-13T01:09:30.123824632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:30.124110 containerd[1470]: time="2024-12-13T01:09:30.124034188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:30.125487 containerd[1470]: time="2024-12-13T01:09:30.124717651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:30.125200 systemd-networkd[1401]: cali8fdb9e0aeb0: Link UP Dec 13 01:09:30.126199 systemd-networkd[1401]: cali8fdb9e0aeb0: Gained carrier Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.005 [INFO][4579] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0 calico-kube-controllers-79d779859c- calico-system 948371ee-1334-4913-b824-f4d34d66addf 939 0 2024-12-13 01:09:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79d779859c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79d779859c-vbhrm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8fdb9e0aeb0 [] []}} ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.005 [INFO][4579] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.050 [INFO][4602] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" HandleID="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.059 [INFO][4602] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" HandleID="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000533880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79d779859c-vbhrm", "timestamp":"2024-12-13 01:09:30.050661946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.059 [INFO][4602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.075 [INFO][4602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.075 [INFO][4602] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.076 [INFO][4602] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.084 [INFO][4602] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.091 [INFO][4602] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.093 [INFO][4602] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.095 [INFO][4602] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.095 [INFO][4602] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.097 [INFO][4602] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.101 [INFO][4602] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.111 [INFO][4602] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.111 [INFO][4602] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" host="localhost" Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.111 [INFO][4602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:09:30.144691 containerd[1470]: 2024-12-13 01:09:30.111 [INFO][4602] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" HandleID="k8s-pod-network.223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.145220 containerd[1470]: 2024-12-13 01:09:30.119 [INFO][4579] cni-plugin/k8s.go 386: Populated endpoint ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0", GenerateName:"calico-kube-controllers-79d779859c-", Namespace:"calico-system", SelfLink:"", UID:"948371ee-1334-4913-b824-f4d34d66addf", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79d779859c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79d779859c-vbhrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fdb9e0aeb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:30.145220 containerd[1470]: 2024-12-13 01:09:30.119 [INFO][4579] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.145220 containerd[1470]: 2024-12-13 01:09:30.119 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fdb9e0aeb0 ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.145220 containerd[1470]: 2024-12-13 01:09:30.126 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.145220 containerd[1470]: 2024-12-13 01:09:30.127 [INFO][4579] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0", GenerateName:"calico-kube-controllers-79d779859c-", Namespace:"calico-system", SelfLink:"", UID:"948371ee-1334-4913-b824-f4d34d66addf", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79d779859c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf", Pod:"calico-kube-controllers-79d779859c-vbhrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fdb9e0aeb0", MAC:"26:47:d9:ae:e9:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:30.145220 containerd[1470]: 2024-12-13 01:09:30.139 [INFO][4579] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf" Namespace="calico-system" Pod="calico-kube-controllers-79d779859c-vbhrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:30.154712 systemd[1]: Started cri-containerd-d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3.scope - libcontainer container d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3. Dec 13 01:09:30.174292 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:09:30.177008 containerd[1470]: time="2024-12-13T01:09:30.175225940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:30.177008 containerd[1470]: time="2024-12-13T01:09:30.176778726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:30.177008 containerd[1470]: time="2024-12-13T01:09:30.176799315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:30.177008 containerd[1470]: time="2024-12-13T01:09:30.176890552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:30.188698 containerd[1470]: time="2024-12-13T01:09:30.188653484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll64m,Uid:7ddefbe4-94ce-41d5-835d-00042427ce7d,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3\"" Dec 13 01:09:30.202578 systemd[1]: Started cri-containerd-223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf.scope - libcontainer container 223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf. Dec 13 01:09:30.214573 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:09:30.240877 containerd[1470]: time="2024-12-13T01:09:30.240828049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d779859c-vbhrm,Uid:948371ee-1334-4913-b824-f4d34d66addf,Namespace:calico-system,Attempt:1,} returns sandbox id \"223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf\"" Dec 13 01:09:30.378601 systemd-networkd[1401]: caliefaa081908b: Gained IPv6LL Dec 13 01:09:30.863807 containerd[1470]: time="2024-12-13T01:09:30.863487363Z" level=info msg="StopPodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\"" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.912 [INFO][4739] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.912 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" iface="eth0" netns="/var/run/netns/cni-9bc05257-f926-f25f-d1aa-472224a91eff" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.912 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" iface="eth0" netns="/var/run/netns/cni-9bc05257-f926-f25f-d1aa-472224a91eff" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.913 [INFO][4739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" iface="eth0" netns="/var/run/netns/cni-9bc05257-f926-f25f-d1aa-472224a91eff" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.913 [INFO][4739] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.913 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.957 [INFO][4747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.957 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.957 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.962 [WARNING][4747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.962 [INFO][4747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.963 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:30.969349 containerd[1470]: 2024-12-13 01:09:30.966 [INFO][4739] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:30.969774 containerd[1470]: time="2024-12-13T01:09:30.969665624Z" level=info msg="TearDown network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" successfully" Dec 13 01:09:30.969774 containerd[1470]: time="2024-12-13T01:09:30.969691825Z" level=info msg="StopPodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" returns successfully" Dec 13 01:09:30.970046 kubelet[2604]: E1213 01:09:30.970012 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:30.970949 containerd[1470]: time="2024-12-13T01:09:30.970638628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrntc,Uid:9ca75959-8db7-4b67-a9a1-33128730b6d4,Namespace:kube-system,Attempt:1,}" Dec 13 01:09:30.995106 systemd[1]: run-netns-cni\x2d9bc05257\x2df926\x2df25f\x2dd1aa\x2d472224a91eff.mount: Deactivated successfully. 
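The kubelet warning above reflects the classic resolv.conf limit of three nameservers: the node lists more servers than that, so only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied and the rest are dropped. A tiny sketch of that truncation; the fourth server in the example input is an assumption, since the log does not show which entries were omitted.

// Keep at most three nameservers, as the resolver (and kubelet) will.
package main

import "fmt"

const maxNameservers = 3 // resolv.conf / glibc limit

func applyNameservers(all []string) []string {
	if len(all) <= maxNameservers {
		return all
	}
	return all[:maxNameservers]
}

func main() {
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // last entry assumed
	fmt.Println("applied nameserver line:", applyNameservers(configured))
}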
Dec 13 01:09:31.082682 systemd-networkd[1401]: cali5d98322a22f: Gained IPv6LL Dec 13 01:09:31.087898 systemd-networkd[1401]: calia3db58d90c5: Link UP Dec 13 01:09:31.088208 systemd-networkd[1401]: calia3db58d90c5: Gained carrier Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.019 [INFO][4756] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0 coredns-7db6d8ff4d- kube-system 9ca75959-8db7-4b67-a9a1-33128730b6d4 954 0 2024-12-13 01:08:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hrntc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3db58d90c5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.019 [INFO][4756] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.048 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" HandleID="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.055 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" HandleID="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027fda0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hrntc", "timestamp":"2024-12-13 01:09:31.048410942 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.055 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.055 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.055 [INFO][4772] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.058 [INFO][4772] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.062 [INFO][4772] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.065 [INFO][4772] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.066 [INFO][4772] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.069 [INFO][4772] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.069 [INFO][4772] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.070 [INFO][4772] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14 Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.073 [INFO][4772] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.080 [INFO][4772] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.080 [INFO][4772] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" host="localhost" Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.080 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
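Annotation: the IPAM walk above confirms the host's existing affinity for block 192.168.88.128/26 and then claims 192.168.88.134 out of it. A small sketch (not Calico's implementation) checking that the claimed address really sits inside that /26 and how many addresses such a block holds:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address taken from the ipam log lines above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := netip.MustParseAddr("192.168.88.134")

	fmt.Println(block.Contains(claimed))  // true: the claim stays inside the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block
}
```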
Dec 13 01:09:31.099094 containerd[1470]: 2024-12-13 01:09:31.080 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" HandleID="k8s-pod-network.8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.100174 containerd[1470]: 2024-12-13 01:09:31.085 [INFO][4756] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9ca75959-8db7-4b67-a9a1-33128730b6d4", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hrntc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3db58d90c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:31.100174 containerd[1470]: 2024-12-13 01:09:31.085 [INFO][4756] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.100174 containerd[1470]: 2024-12-13 01:09:31.085 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3db58d90c5 ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.100174 containerd[1470]: 2024-12-13 01:09:31.087 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.100174 containerd[1470]: 2024-12-13 01:09:31.088 
[INFO][4756] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9ca75959-8db7-4b67-a9a1-33128730b6d4", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14", Pod:"coredns-7db6d8ff4d-hrntc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3db58d90c5", MAC:"12:c7:38:00:68:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:31.100174 containerd[1470]: 2024-12-13 01:09:31.096 [INFO][4756] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrntc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:31.305257 containerd[1470]: time="2024-12-13T01:09:31.304639615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:31.305392 containerd[1470]: time="2024-12-13T01:09:31.305270346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:31.305392 containerd[1470]: time="2024-12-13T01:09:31.305288271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:31.305392 containerd[1470]: time="2024-12-13T01:09:31.305377043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:31.318781 containerd[1470]: time="2024-12-13T01:09:31.317667280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:09:31.318781 containerd[1470]: time="2024-12-13T01:09:31.318296176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:31.319676 containerd[1470]: time="2024-12-13T01:09:31.319637262Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:31.320218 containerd[1470]: time="2024-12-13T01:09:31.320186936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:31.321674 containerd[1470]: time="2024-12-13T01:09:31.321358843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.103480292s" Dec 13 01:09:31.321674 containerd[1470]: time="2024-12-13T01:09:31.321388601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:09:31.323878 containerd[1470]: time="2024-12-13T01:09:31.322596068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:09:31.324201 containerd[1470]: time="2024-12-13T01:09:31.324170054Z" level=info msg="CreateContainer within sandbox \"8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:09:31.327589 systemd[1]: Started cri-containerd-8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14.scope - libcontainer container 8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14. 
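Annotation: the pull above reports 42001404 bytes read for ghcr.io/flatcar/calico/apiserver:v3.29.1 over the quoted 2.103480292s (the unpacked size is listed separately as 43494504). A back-of-envelope throughput check from those two figures, assuming they are exact:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the containerd pull messages above.
	bytesRead := 42001404.0
	elapsed := 2103480292 * time.Nanosecond // the reported "2.103480292s"

	fmt.Printf("%.1f MB/s\n", bytesRead/elapsed.Seconds()/1e6) // ~20.0 MB/s
}
```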
Dec 13 01:09:31.338644 systemd-networkd[1401]: cali8fdb9e0aeb0: Gained IPv6LL Dec 13 01:09:31.343721 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:09:31.344862 containerd[1470]: time="2024-12-13T01:09:31.344824839Z" level=info msg="CreateContainer within sandbox \"8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90076325d41737c19246e3c33576a4a3d619d00ebce637362301aa73fd9b4302\"" Dec 13 01:09:31.345637 containerd[1470]: time="2024-12-13T01:09:31.345608716Z" level=info msg="StartContainer for \"90076325d41737c19246e3c33576a4a3d619d00ebce637362301aa73fd9b4302\"" Dec 13 01:09:31.369370 containerd[1470]: time="2024-12-13T01:09:31.369051345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrntc,Uid:9ca75959-8db7-4b67-a9a1-33128730b6d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14\"" Dec 13 01:09:31.370250 kubelet[2604]: E1213 01:09:31.370225 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:31.372802 containerd[1470]: time="2024-12-13T01:09:31.372761686Z" level=info msg="CreateContainer within sandbox \"8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:09:31.374610 systemd[1]: Started cri-containerd-90076325d41737c19246e3c33576a4a3d619d00ebce637362301aa73fd9b4302.scope - libcontainer container 90076325d41737c19246e3c33576a4a3d619d00ebce637362301aa73fd9b4302. Dec 13 01:09:31.389559 containerd[1470]: time="2024-12-13T01:09:31.389525049Z" level=info msg="CreateContainer within sandbox \"8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3eab0549407341d34c17b4f14ac125f11ade68e5abddbd41614a1650ec3aa31\"" Dec 13 01:09:31.390107 containerd[1470]: time="2024-12-13T01:09:31.390075074Z" level=info msg="StartContainer for \"f3eab0549407341d34c17b4f14ac125f11ade68e5abddbd41614a1650ec3aa31\"" Dec 13 01:09:31.417602 systemd[1]: Started cri-containerd-f3eab0549407341d34c17b4f14ac125f11ade68e5abddbd41614a1650ec3aa31.scope - libcontainer container f3eab0549407341d34c17b4f14ac125f11ade68e5abddbd41614a1650ec3aa31. 
Dec 13 01:09:31.421765 containerd[1470]: time="2024-12-13T01:09:31.421722213Z" level=info msg="StartContainer for \"90076325d41737c19246e3c33576a4a3d619d00ebce637362301aa73fd9b4302\" returns successfully" Dec 13 01:09:31.445776 containerd[1470]: time="2024-12-13T01:09:31.445730898Z" level=info msg="StartContainer for \"f3eab0549407341d34c17b4f14ac125f11ade68e5abddbd41614a1650ec3aa31\" returns successfully" Dec 13 01:09:31.733590 containerd[1470]: time="2024-12-13T01:09:31.733528330Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:31.734813 containerd[1470]: time="2024-12-13T01:09:31.734264183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:09:31.736476 containerd[1470]: time="2024-12-13T01:09:31.736433802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 413.810362ms" Dec 13 01:09:31.736530 containerd[1470]: time="2024-12-13T01:09:31.736479060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:09:31.737837 containerd[1470]: time="2024-12-13T01:09:31.737814915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:09:31.738739 containerd[1470]: time="2024-12-13T01:09:31.738689568Z" level=info msg="CreateContainer within sandbox \"7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:09:31.750629 containerd[1470]: time="2024-12-13T01:09:31.750583838Z" level=info msg="CreateContainer within sandbox \"7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8f6107cae121522a57e9dc681d0cf666c06bf20031d29fcc543140956d1e0d7\"" Dec 13 01:09:31.751423 containerd[1470]: time="2024-12-13T01:09:31.751370871Z" level=info msg="StartContainer for \"e8f6107cae121522a57e9dc681d0cf666c06bf20031d29fcc543140956d1e0d7\"" Dec 13 01:09:31.784581 systemd[1]: Started cri-containerd-e8f6107cae121522a57e9dc681d0cf666c06bf20031d29fcc543140956d1e0d7.scope - libcontainer container e8f6107cae121522a57e9dc681d0cf666c06bf20031d29fcc543140956d1e0d7. 
Dec 13 01:09:31.823862 containerd[1470]: time="2024-12-13T01:09:31.823821508Z" level=info msg="StartContainer for \"e8f6107cae121522a57e9dc681d0cf666c06bf20031d29fcc543140956d1e0d7\" returns successfully" Dec 13 01:09:31.915675 systemd-networkd[1401]: calie0f779e2371: Gained IPv6LL Dec 13 01:09:32.027018 kubelet[2604]: E1213 01:09:32.026369 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:32.035605 kubelet[2604]: I1213 01:09:32.035556 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hrntc" podStartSLOduration=39.035542308 podStartE2EDuration="39.035542308s" podCreationTimestamp="2024-12-13 01:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:09:32.034750465 +0000 UTC m=+54.370427391" watchObservedRunningTime="2024-12-13 01:09:32.035542308 +0000 UTC m=+54.371219234" Dec 13 01:09:32.058603 kubelet[2604]: I1213 01:09:32.058236 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87f858bdd-n5nxj" podStartSLOduration=30.570666759 podStartE2EDuration="33.058220626s" podCreationTimestamp="2024-12-13 01:08:59 +0000 UTC" firstStartedPulling="2024-12-13 01:09:29.249622893 +0000 UTC m=+51.585299829" lastFinishedPulling="2024-12-13 01:09:31.73717676 +0000 UTC m=+54.072853696" observedRunningTime="2024-12-13 01:09:32.050195729 +0000 UTC m=+54.385872665" watchObservedRunningTime="2024-12-13 01:09:32.058220626 +0000 UTC m=+54.393897562" Dec 13 01:09:32.143029 kubelet[2604]: I1213 01:09:32.142979 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87f858bdd-ssrdn" podStartSLOduration=31.038298917 podStartE2EDuration="33.142964323s" podCreationTimestamp="2024-12-13 01:08:59 +0000 UTC" firstStartedPulling="2024-12-13 01:09:29.21750628 +0000 UTC m=+51.553183216" lastFinishedPulling="2024-12-13 01:09:31.322171686 +0000 UTC m=+53.657848622" observedRunningTime="2024-12-13 01:09:32.058497021 +0000 UTC m=+54.394173977" watchObservedRunningTime="2024-12-13 01:09:32.142964323 +0000 UTC m=+54.478641259" Dec 13 01:09:32.472600 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:42206.service - OpenSSH per-connection server daemon (10.0.0.1:42206). Dec 13 01:09:32.523579 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 42206 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:32.525453 sshd[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:32.535156 systemd-logind[1448]: New session 16 of user core. Dec 13 01:09:32.543734 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:09:32.669595 sshd[4962]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:32.673589 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:42206.service: Deactivated successfully. Dec 13 01:09:32.675598 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:09:32.676314 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:09:32.677326 systemd-logind[1448]: Removed session 16. 
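Annotation: the podStartSLOduration values logged above line up with the gap between podCreationTimestamp and watchObservedRunningTime; for coredns-7db6d8ff4d-hrntc that is 01:08:53 to 01:09:32.035542308, i.e. the reported 39.035542308s. The same subtraction as a short sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker line above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2024-12-13 01:08:53 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-12-13 01:09:32.035542308 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 39.035542308s
}
```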
Dec 13 01:09:32.683640 systemd-networkd[1401]: calia3db58d90c5: Gained IPv6LL Dec 13 01:09:33.023374 containerd[1470]: time="2024-12-13T01:09:33.023318988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:33.024084 containerd[1470]: time="2024-12-13T01:09:33.024022710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:09:33.025288 containerd[1470]: time="2024-12-13T01:09:33.025258961Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:33.027151 containerd[1470]: time="2024-12-13T01:09:33.027080424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:33.027775 containerd[1470]: time="2024-12-13T01:09:33.027744068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.289904666s" Dec 13 01:09:33.027817 containerd[1470]: time="2024-12-13T01:09:33.027774497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:09:33.029119 containerd[1470]: time="2024-12-13T01:09:33.028908841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:09:33.030973 containerd[1470]: time="2024-12-13T01:09:33.030431958Z" level=info msg="CreateContainer within sandbox \"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:09:33.031748 kubelet[2604]: I1213 01:09:33.031724 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:09:33.032831 kubelet[2604]: E1213 01:09:33.032731 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:33.046448 containerd[1470]: time="2024-12-13T01:09:33.046390347Z" level=info msg="CreateContainer within sandbox \"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"66a43d5a3504984c08e09eaaab8c3fce0fb348c05e8bb6300459270613e5617f\"" Dec 13 01:09:33.046880 containerd[1470]: time="2024-12-13T01:09:33.046844986Z" level=info msg="StartContainer for \"66a43d5a3504984c08e09eaaab8c3fce0fb348c05e8bb6300459270613e5617f\"" Dec 13 01:09:33.071292 systemd[1]: run-containerd-runc-k8s.io-66a43d5a3504984c08e09eaaab8c3fce0fb348c05e8bb6300459270613e5617f-runc.LR2A7W.mount: Deactivated successfully. Dec 13 01:09:33.079588 systemd[1]: Started cri-containerd-66a43d5a3504984c08e09eaaab8c3fce0fb348c05e8bb6300459270613e5617f.scope - libcontainer container 66a43d5a3504984c08e09eaaab8c3fce0fb348c05e8bb6300459270613e5617f. 
Dec 13 01:09:33.109801 containerd[1470]: time="2024-12-13T01:09:33.109762413Z" level=info msg="StartContainer for \"66a43d5a3504984c08e09eaaab8c3fce0fb348c05e8bb6300459270613e5617f\" returns successfully" Dec 13 01:09:35.035527 containerd[1470]: time="2024-12-13T01:09:35.035473642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:35.036219 containerd[1470]: time="2024-12-13T01:09:35.036135845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:09:35.037821 containerd[1470]: time="2024-12-13T01:09:35.037777399Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:35.039895 containerd[1470]: time="2024-12-13T01:09:35.039858175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:35.040615 containerd[1470]: time="2024-12-13T01:09:35.040578035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.011632475s" Dec 13 01:09:35.040659 containerd[1470]: time="2024-12-13T01:09:35.040613902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:09:35.042054 containerd[1470]: time="2024-12-13T01:09:35.041613369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:09:35.054645 containerd[1470]: time="2024-12-13T01:09:35.053885751Z" level=info msg="CreateContainer within sandbox \"223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:09:35.076017 containerd[1470]: time="2024-12-13T01:09:35.071301298Z" level=info msg="CreateContainer within sandbox \"223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f7df24997196075746504089499208b58f0fd0171c080aee0913dacec3e7ee05\"" Dec 13 01:09:35.076638 containerd[1470]: time="2024-12-13T01:09:35.076606604Z" level=info msg="StartContainer for \"f7df24997196075746504089499208b58f0fd0171c080aee0913dacec3e7ee05\"" Dec 13 01:09:35.111580 systemd[1]: Started cri-containerd-f7df24997196075746504089499208b58f0fd0171c080aee0913dacec3e7ee05.scope - libcontainer container f7df24997196075746504089499208b58f0fd0171c080aee0913dacec3e7ee05. 
Dec 13 01:09:35.161714 containerd[1470]: time="2024-12-13T01:09:35.161616449Z" level=info msg="StartContainer for \"f7df24997196075746504089499208b58f0fd0171c080aee0913dacec3e7ee05\" returns successfully" Dec 13 01:09:36.085496 kubelet[2604]: I1213 01:09:36.085369 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79d779859c-vbhrm" podStartSLOduration=31.286039571 podStartE2EDuration="36.085350775s" podCreationTimestamp="2024-12-13 01:09:00 +0000 UTC" firstStartedPulling="2024-12-13 01:09:30.24211003 +0000 UTC m=+52.577786966" lastFinishedPulling="2024-12-13 01:09:35.041421234 +0000 UTC m=+57.377098170" observedRunningTime="2024-12-13 01:09:36.084629131 +0000 UTC m=+58.420306067" watchObservedRunningTime="2024-12-13 01:09:36.085350775 +0000 UTC m=+58.421027721" Dec 13 01:09:37.001239 containerd[1470]: time="2024-12-13T01:09:37.001177350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:37.001898 containerd[1470]: time="2024-12-13T01:09:37.001845407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:09:37.002948 containerd[1470]: time="2024-12-13T01:09:37.002905130Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:37.005095 containerd[1470]: time="2024-12-13T01:09:37.005024937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:09:37.005595 containerd[1470]: time="2024-12-13T01:09:37.005552915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.963905694s" Dec 13 01:09:37.005643 containerd[1470]: time="2024-12-13T01:09:37.005594873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:09:37.007536 containerd[1470]: time="2024-12-13T01:09:37.007511012Z" level=info msg="CreateContainer within sandbox \"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:09:37.021324 containerd[1470]: time="2024-12-13T01:09:37.021269941Z" level=info msg="CreateContainer within sandbox \"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"60d2f3e287f4331170e71ab180f6c0a8b7101ed6c60bcc9c83742b5f627c1932\"" Dec 13 01:09:37.021797 containerd[1470]: time="2024-12-13T01:09:37.021771380Z" level=info msg="StartContainer for \"60d2f3e287f4331170e71ab180f6c0a8b7101ed6c60bcc9c83742b5f627c1932\"" Dec 13 01:09:37.052594 systemd[1]: Started cri-containerd-60d2f3e287f4331170e71ab180f6c0a8b7101ed6c60bcc9c83742b5f627c1932.scope - libcontainer container 
60d2f3e287f4331170e71ab180f6c0a8b7101ed6c60bcc9c83742b5f627c1932. Dec 13 01:09:37.083823 containerd[1470]: time="2024-12-13T01:09:37.083691206Z" level=info msg="StartContainer for \"60d2f3e287f4331170e71ab180f6c0a8b7101ed6c60bcc9c83742b5f627c1932\" returns successfully" Dec 13 01:09:37.681516 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:42208.service - OpenSSH per-connection server daemon (10.0.0.1:42208). Dec 13 01:09:37.724564 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 42208 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:37.726346 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:37.730149 systemd-logind[1448]: New session 17 of user core. Dec 13 01:09:37.737544 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:09:37.740639 containerd[1470]: time="2024-12-13T01:09:37.740585719Z" level=info msg="StopPodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\"" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.776 [WARNING][5139] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab737b6d-349c-469a-b31b-6775293b8eb1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c", Pod:"coredns-7db6d8ff4d-jnh4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef87ffe9dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.776 [INFO][5139] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.776 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" iface="eth0" netns="" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.776 [INFO][5139] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.776 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.795 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.795 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.795 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.801 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.801 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.803 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:37.808064 containerd[1470]: 2024-12-13 01:09:37.805 [INFO][5139] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.808742 containerd[1470]: time="2024-12-13T01:09:37.808105019Z" level=info msg="TearDown network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" successfully" Dec 13 01:09:37.808742 containerd[1470]: time="2024-12-13T01:09:37.808131317Z" level=info msg="StopPodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" returns successfully" Dec 13 01:09:37.815273 containerd[1470]: time="2024-12-13T01:09:37.815233986Z" level=info msg="RemovePodSandbox for \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\"" Dec 13 01:09:37.817318 containerd[1470]: time="2024-12-13T01:09:37.817288291Z" level=info msg="Forcibly stopping sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\"" Dec 13 01:09:37.862997 sshd[5120]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:37.867206 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:42208.service: Deactivated successfully. Dec 13 01:09:37.869057 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:09:37.869899 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:09:37.871085 systemd-logind[1448]: Removed session 17. 
Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.849 [WARNING][5178] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab737b6d-349c-469a-b31b-6775293b8eb1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4be71aeff5973f8e6b8a8e10d17267cf023f9526541ded475c675021da56a37c", Pod:"coredns-7db6d8ff4d-jnh4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibef87ffe9dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.849 [INFO][5178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.849 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" iface="eth0" netns="" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.849 [INFO][5178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.849 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.875 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.875 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.875 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.879 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.880 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" HandleID="k8s-pod-network.ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Workload="localhost-k8s-coredns--7db6d8ff4d--jnh4m-eth0" Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.881 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:37.886559 containerd[1470]: 2024-12-13 01:09:37.884 [INFO][5178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff" Dec 13 01:09:37.886963 containerd[1470]: time="2024-12-13T01:09:37.886600593Z" level=info msg="TearDown network for sandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" successfully" Dec 13 01:09:37.944223 kubelet[2604]: I1213 01:09:37.944124 2604 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:09:37.944223 kubelet[2604]: I1213 01:09:37.944161 2604 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:09:37.946313 containerd[1470]: time="2024-12-13T01:09:37.946258440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:09:37.946375 containerd[1470]: time="2024-12-13T01:09:37.946344700Z" level=info msg="RemovePodSandbox \"ab2ce378b6c9acd19f0f87fdbe7f44b2e29ffd0f0b4fd60177c114d130fb2bff\" returns successfully" Dec 13 01:09:37.947159 containerd[1470]: time="2024-12-13T01:09:37.947127410Z" level=info msg="StopPodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\"" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:37.982 [WARNING][5213] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0", GenerateName:"calico-kube-controllers-79d779859c-", Namespace:"calico-system", SelfLink:"", UID:"948371ee-1334-4913-b824-f4d34d66addf", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79d779859c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf", Pod:"calico-kube-controllers-79d779859c-vbhrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fdb9e0aeb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:37.982 [INFO][5213] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:37.982 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" iface="eth0" netns="" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:37.982 [INFO][5213] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:37.982 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.012 [INFO][5220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.012 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.012 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.020 [WARNING][5220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.020 [INFO][5220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.022 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.030026 containerd[1470]: 2024-12-13 01:09:38.024 [INFO][5213] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.030782 containerd[1470]: time="2024-12-13T01:09:38.030059971Z" level=info msg="TearDown network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" successfully" Dec 13 01:09:38.030782 containerd[1470]: time="2024-12-13T01:09:38.030086371Z" level=info msg="StopPodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" returns successfully" Dec 13 01:09:38.030782 containerd[1470]: time="2024-12-13T01:09:38.030322919Z" level=info msg="RemovePodSandbox for \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\"" Dec 13 01:09:38.030782 containerd[1470]: time="2024-12-13T01:09:38.030346242Z" level=info msg="Forcibly stopping sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\"" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.085 [WARNING][5245] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0", GenerateName:"calico-kube-controllers-79d779859c-", Namespace:"calico-system", SelfLink:"", UID:"948371ee-1334-4913-b824-f4d34d66addf", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79d779859c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"223a0435850277ab95e3524b5698a18c64f5a4eb07bcb51a929be6d0fb0fcbdf", Pod:"calico-kube-controllers-79d779859c-vbhrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fdb9e0aeb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.086 [INFO][5245] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.086 [INFO][5245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" iface="eth0" netns="" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.086 [INFO][5245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.086 [INFO][5245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.105 [INFO][5253] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.105 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.105 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.110 [WARNING][5253] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.110 [INFO][5253] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" HandleID="k8s-pod-network.39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Workload="localhost-k8s-calico--kube--controllers--79d779859c--vbhrm-eth0" Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.111 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.116049 containerd[1470]: 2024-12-13 01:09:38.113 [INFO][5245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336" Dec 13 01:09:38.116518 containerd[1470]: time="2024-12-13T01:09:38.116102202Z" level=info msg="TearDown network for sandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" successfully" Dec 13 01:09:38.120190 containerd[1470]: time="2024-12-13T01:09:38.120144278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:09:38.120362 containerd[1470]: time="2024-12-13T01:09:38.120212635Z" level=info msg="RemovePodSandbox \"39fc4b41b4dfee28d1969dca5b2544ea6a6a57f79a0e795c70b1f4ca9eff1336\" returns successfully" Dec 13 01:09:38.120811 containerd[1470]: time="2024-12-13T01:09:38.120769828Z" level=info msg="StopPodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\"" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.156 [WARNING][5275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll64m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ddefbe4-94ce-41d5-835d-00042427ce7d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3", Pod:"csi-node-driver-ll64m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0f779e2371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.156 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.156 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" iface="eth0" netns="" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.156 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.156 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.176 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.176 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.177 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.181 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.181 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.182 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.186927 containerd[1470]: 2024-12-13 01:09:38.184 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.187716 containerd[1470]: time="2024-12-13T01:09:38.186963268Z" level=info msg="TearDown network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" successfully" Dec 13 01:09:38.187716 containerd[1470]: time="2024-12-13T01:09:38.186993064Z" level=info msg="StopPodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" returns successfully" Dec 13 01:09:38.187716 containerd[1470]: time="2024-12-13T01:09:38.187579681Z" level=info msg="RemovePodSandbox for \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\"" Dec 13 01:09:38.187716 containerd[1470]: time="2024-12-13T01:09:38.187609126Z" level=info msg="Forcibly stopping sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\"" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.223 [WARNING][5307] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll64m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ddefbe4-94ce-41d5-835d-00042427ce7d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3bb4239991fcaa9da6c049c357123892567ce8b9e35a1a9df884fe78c730db3", Pod:"csi-node-driver-ll64m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0f779e2371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.224 [INFO][5307] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.224 [INFO][5307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" iface="eth0" netns="" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.224 [INFO][5307] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.224 [INFO][5307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.247 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.247 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.247 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.252 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.252 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" HandleID="k8s-pod-network.9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Workload="localhost-k8s-csi--node--driver--ll64m-eth0" Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.253 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.262473 containerd[1470]: 2024-12-13 01:09:38.256 [INFO][5307] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5" Dec 13 01:09:38.262473 containerd[1470]: time="2024-12-13T01:09:38.260662638Z" level=info msg="TearDown network for sandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" successfully" Dec 13 01:09:38.269884 containerd[1470]: time="2024-12-13T01:09:38.269832024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:09:38.270055 containerd[1470]: time="2024-12-13T01:09:38.269900761Z" level=info msg="RemovePodSandbox \"9a0cdca7fddcf8f1f6fc5c87cabe98fc404e6b131abf1d1c137bf37aa79264e5\" returns successfully" Dec 13 01:09:38.270492 containerd[1470]: time="2024-12-13T01:09:38.270426736Z" level=info msg="StopPodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\"" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.301 [WARNING][5336] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"845ed845-9b07-4cfb-b5d6-9248233c4e24", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e", Pod:"calico-apiserver-87f858bdd-ssrdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefaa081908b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.301 [INFO][5336] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.301 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" iface="eth0" netns="" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.301 [INFO][5336] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.301 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.320 [INFO][5343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.320 [INFO][5343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.320 [INFO][5343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.324 [WARNING][5343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.324 [INFO][5343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.326 [INFO][5343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.330353 containerd[1470]: 2024-12-13 01:09:38.328 [INFO][5336] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.330782 containerd[1470]: time="2024-12-13T01:09:38.330399690Z" level=info msg="TearDown network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" successfully" Dec 13 01:09:38.330782 containerd[1470]: time="2024-12-13T01:09:38.330430938Z" level=info msg="StopPodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" returns successfully" Dec 13 01:09:38.331019 containerd[1470]: time="2024-12-13T01:09:38.330982070Z" level=info msg="RemovePodSandbox for \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\"" Dec 13 01:09:38.331060 containerd[1470]: time="2024-12-13T01:09:38.331016945Z" level=info msg="Forcibly stopping sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\"" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.364 [WARNING][5366] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"845ed845-9b07-4cfb-b5d6-9248233c4e24", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8caaaced65dcde205403ee1a57498801c73995cde4ba8d28f7682af8b7b08a1e", Pod:"calico-apiserver-87f858bdd-ssrdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefaa081908b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.364 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.364 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" iface="eth0" netns="" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.364 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.364 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.382 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.382 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.382 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.387 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.387 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" HandleID="k8s-pod-network.9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Workload="localhost-k8s-calico--apiserver--87f858bdd--ssrdn-eth0" Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.388 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.392520 containerd[1470]: 2024-12-13 01:09:38.390 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c" Dec 13 01:09:38.392954 containerd[1470]: time="2024-12-13T01:09:38.392564919Z" level=info msg="TearDown network for sandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" successfully" Dec 13 01:09:38.396834 containerd[1470]: time="2024-12-13T01:09:38.396796547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:09:38.396889 containerd[1470]: time="2024-12-13T01:09:38.396854334Z" level=info msg="RemovePodSandbox \"9105a85ae88bec0909d7924a5ea410e105dd8756256838ac9d0ace538a1db69c\" returns successfully" Dec 13 01:09:38.397464 containerd[1470]: time="2024-12-13T01:09:38.397403042Z" level=info msg="StopPodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\"" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.430 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9ca75959-8db7-4b67-a9a1-33128730b6d4", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14", Pod:"coredns-7db6d8ff4d-hrntc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3db58d90c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.430 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.430 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" iface="eth0" netns="" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.430 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.430 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.449 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.449 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.449 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.453 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.453 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.454 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.458982 containerd[1470]: 2024-12-13 01:09:38.456 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.459417 containerd[1470]: time="2024-12-13T01:09:38.459035603Z" level=info msg="TearDown network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" successfully" Dec 13 01:09:38.459417 containerd[1470]: time="2024-12-13T01:09:38.459062012Z" level=info msg="StopPodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" returns successfully" Dec 13 01:09:38.459644 containerd[1470]: time="2024-12-13T01:09:38.459617002Z" level=info msg="RemovePodSandbox for \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\"" Dec 13 01:09:38.459677 containerd[1470]: time="2024-12-13T01:09:38.459655242Z" level=info msg="Forcibly stopping sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\"" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.491 [WARNING][5426] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9ca75959-8db7-4b67-a9a1-33128730b6d4", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f4c0eb08cdb3438008b37f1cca6172b779f6c5ab666351d327823eac8396d14", Pod:"coredns-7db6d8ff4d-hrntc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3db58d90c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.492 [INFO][5426] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.492 [INFO][5426] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" iface="eth0" netns="" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.492 [INFO][5426] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.492 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.511 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.511 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.511 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.516 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.516 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" HandleID="k8s-pod-network.499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Workload="localhost-k8s-coredns--7db6d8ff4d--hrntc-eth0" Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.517 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.522467 containerd[1470]: 2024-12-13 01:09:38.520 [INFO][5426] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d" Dec 13 01:09:38.522467 containerd[1470]: time="2024-12-13T01:09:38.522406138Z" level=info msg="TearDown network for sandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" successfully" Dec 13 01:09:38.526384 containerd[1470]: time="2024-12-13T01:09:38.526358397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:09:38.526486 containerd[1470]: time="2024-12-13T01:09:38.526402559Z" level=info msg="RemovePodSandbox \"499b0518854e07ee6f0bbc8903f01373dcf6f1df2e5af0a1f142fae397364b9d\" returns successfully" Dec 13 01:09:38.526875 containerd[1470]: time="2024-12-13T01:09:38.526843908Z" level=info msg="StopPodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\"" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.558 [WARNING][5456] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02b88a36-d2b9-4dbc-acd0-c7e3095fe180", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009", Pod:"calico-apiserver-87f858bdd-n5nxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d98322a22f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.558 [INFO][5456] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.558 [INFO][5456] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" iface="eth0" netns="" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.558 [INFO][5456] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.558 [INFO][5456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.578 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.578 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.578 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.582 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.583 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.584 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.588545 containerd[1470]: 2024-12-13 01:09:38.586 [INFO][5456] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.588945 containerd[1470]: time="2024-12-13T01:09:38.588588347Z" level=info msg="TearDown network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" successfully" Dec 13 01:09:38.588945 containerd[1470]: time="2024-12-13T01:09:38.588619174Z" level=info msg="StopPodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" returns successfully" Dec 13 01:09:38.589211 containerd[1470]: time="2024-12-13T01:09:38.589181867Z" level=info msg="RemovePodSandbox for \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\"" Dec 13 01:09:38.589241 containerd[1470]: time="2024-12-13T01:09:38.589217935Z" level=info msg="Forcibly stopping sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\"" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.620 [WARNING][5486] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0", GenerateName:"calico-apiserver-87f858bdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02b88a36-d2b9-4dbc-acd0-c7e3095fe180", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f858bdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7848f7313b0117ff6c4bf4856fe663298dc0216b73f0358557e56d8e9492e009", Pod:"calico-apiserver-87f858bdd-n5nxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d98322a22f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.620 [INFO][5486] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.620 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" iface="eth0" netns="" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.620 [INFO][5486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.621 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.640 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.640 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.640 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.645 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.645 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" HandleID="k8s-pod-network.7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Workload="localhost-k8s-calico--apiserver--87f858bdd--n5nxj-eth0" Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.646 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:09:38.650597 containerd[1470]: 2024-12-13 01:09:38.648 [INFO][5486] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb" Dec 13 01:09:38.650999 containerd[1470]: time="2024-12-13T01:09:38.650636990Z" level=info msg="TearDown network for sandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" successfully" Dec 13 01:09:38.654534 containerd[1470]: time="2024-12-13T01:09:38.654495496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:09:38.654583 containerd[1470]: time="2024-12-13T01:09:38.654554024Z" level=info msg="RemovePodSandbox \"7779cdd32f83c0a61c30100ae9083b2381246501fbf6b87444746b5cd0b7c1eb\" returns successfully" Dec 13 01:09:41.398620 kubelet[2604]: E1213 01:09:41.398567 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:41.410140 kubelet[2604]: I1213 01:09:41.409763 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ll64m" podStartSLOduration=35.593942409 podStartE2EDuration="42.409744035s" podCreationTimestamp="2024-12-13 01:08:59 +0000 UTC" firstStartedPulling="2024-12-13 01:09:30.190467045 +0000 UTC m=+52.526143981" lastFinishedPulling="2024-12-13 01:09:37.006268671 +0000 UTC m=+59.341945607" observedRunningTime="2024-12-13 01:09:38.072288226 +0000 UTC m=+60.407965182" watchObservedRunningTime="2024-12-13 01:09:41.409744035 +0000 UTC m=+63.745420971" Dec 13 01:09:42.881197 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:49794.service - OpenSSH per-connection server daemon (10.0.0.1:49794). Dec 13 01:09:42.952359 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 49794 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:42.953854 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:42.957716 systemd-logind[1448]: New session 18 of user core. Dec 13 01:09:42.965553 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:09:43.077755 sshd[5531]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:43.081659 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:49794.service: Deactivated successfully. Dec 13 01:09:43.083725 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:09:43.084316 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. 
Dec 13 01:09:43.085340 systemd-logind[1448]: Removed session 18. Dec 13 01:09:48.090603 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:46598.service - OpenSSH per-connection server daemon (10.0.0.1:46598). Dec 13 01:09:48.123527 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 46598 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:48.125119 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:48.129162 systemd-logind[1448]: New session 19 of user core. Dec 13 01:09:48.137558 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:09:48.243052 sshd[5565]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:48.251081 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:46598.service: Deactivated successfully. Dec 13 01:09:48.252689 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:09:48.254393 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:09:48.268957 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:46614.service - OpenSSH per-connection server daemon (10.0.0.1:46614). Dec 13 01:09:48.270000 systemd-logind[1448]: Removed session 19. Dec 13 01:09:48.296123 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 46614 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:48.297580 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:48.301817 systemd-logind[1448]: New session 20 of user core. Dec 13 01:09:48.311568 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:09:48.578721 sshd[5580]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:48.597403 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:46614.service: Deactivated successfully. Dec 13 01:09:48.599132 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:09:48.600480 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:09:48.609651 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). Dec 13 01:09:48.610544 systemd-logind[1448]: Removed session 20. Dec 13 01:09:48.639916 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:48.641539 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:48.645907 systemd-logind[1448]: New session 21 of user core. Dec 13 01:09:48.653559 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:09:50.168646 sshd[5593]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:50.180470 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:46624.service: Deactivated successfully. Dec 13 01:09:50.182212 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:09:50.184047 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:09:50.193810 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:46636.service - OpenSSH per-connection server daemon (10.0.0.1:46636). Dec 13 01:09:50.197180 systemd-logind[1448]: Removed session 21. Dec 13 01:09:50.224970 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 46636 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:50.229423 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:50.236993 systemd-logind[1448]: New session 22 of user core. 
Dec 13 01:09:50.245423 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:09:50.475181 sshd[5613]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:50.486987 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:46636.service: Deactivated successfully. Dec 13 01:09:50.489024 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:09:50.490778 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:09:50.496264 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:46652.service - OpenSSH per-connection server daemon (10.0.0.1:46652). Dec 13 01:09:50.497695 systemd-logind[1448]: Removed session 22. Dec 13 01:09:50.527228 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 46652 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:50.528490 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:50.532741 systemd-logind[1448]: New session 23 of user core. Dec 13 01:09:50.539681 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:09:50.652045 sshd[5625]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:50.656479 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:46652.service: Deactivated successfully. Dec 13 01:09:50.658955 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:09:50.661781 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:09:50.663283 systemd-logind[1448]: Removed session 23. Dec 13 01:09:55.669495 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:46664.service - OpenSSH per-connection server daemon (10.0.0.1:46664). Dec 13 01:09:55.700953 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 46664 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:55.702358 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:55.705907 systemd-logind[1448]: New session 24 of user core. Dec 13 01:09:55.712574 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:09:55.823851 sshd[5646]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:55.828150 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:46664.service: Deactivated successfully. Dec 13 01:09:55.830528 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:09:55.831257 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:09:55.832242 systemd-logind[1448]: Removed session 24. Dec 13 01:09:58.836490 kubelet[2604]: I1213 01:09:58.836393 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:10:00.836065 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:52364.service - OpenSSH per-connection server daemon (10.0.0.1:52364). Dec 13 01:10:00.867244 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:00.868919 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:00.873448 systemd-logind[1448]: New session 25 of user core. Dec 13 01:10:00.881582 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:10:00.983798 sshd[5664]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:00.988135 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:52364.service: Deactivated successfully. Dec 13 01:10:00.990359 systemd[1]: session-25.scope: Deactivated successfully. 
Dec 13 01:10:00.991116 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:10:00.992151 systemd-logind[1448]: Removed session 25. Dec 13 01:10:06.001337 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). Dec 13 01:10:06.034212 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:06.037514 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:06.042456 systemd-logind[1448]: New session 26 of user core. Dec 13 01:10:06.057614 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:10:06.172944 sshd[5684]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:06.177064 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:52378.service: Deactivated successfully. Dec 13 01:10:06.178985 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:10:06.179611 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:10:06.180835 systemd-logind[1448]: Removed session 26. Dec 13 01:10:09.858130 kubelet[2604]: E1213 01:10:09.858086 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:11.184133 systemd[1]: Started sshd@26-10.0.0.52:22-10.0.0.1:47876.service - OpenSSH per-connection server daemon (10.0.0.1:47876). Dec 13 01:10:11.218791 sshd[5700]: Accepted publickey for core from 10.0.0.1 port 47876 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:11.220802 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:11.225037 systemd-logind[1448]: New session 27 of user core. Dec 13 01:10:11.235711 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:10:11.343655 sshd[5700]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:11.353857 systemd[1]: sshd@26-10.0.0.52:22-10.0.0.1:47876.service: Deactivated successfully. Dec 13 01:10:11.356078 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:10:11.358115 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:10:11.359549 systemd-logind[1448]: Removed session 27.
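[editor's note] The recurring kubelet warning "Nameserver limits exceeded" is emitted when the host's resolv.conf lists more nameservers than the resolver limit of three; the kubelet keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8, per the applied line in the message) and omits the rest when building pod DNS config. A hypothetical resolv.conf that would produce exactly this warning; the actual file on this host is not shown in the log:

# /etc/resolv.conf (illustrative -- any fourth entry triggers the kubelet warning)
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9   # hypothetical extra entry; omitted by the kubelet, only the first three are applied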