Dec 13 01:10:11.874223 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:10:11.874247 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:10:11.874258 kernel: BIOS-provided physical RAM map: Dec 13 01:10:11.874265 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:10:11.874271 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:10:11.874277 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:10:11.874284 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 01:10:11.874290 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 01:10:11.874297 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:10:11.874305 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:10:11.874311 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:10:11.874318 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:10:11.874324 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 01:10:11.874330 kernel: NX (Execute Disable) protection: active Dec 13 01:10:11.874338 kernel: APIC: Static calls initialized Dec 13 01:10:11.874347 kernel: SMBIOS 2.8 present. 
Dec 13 01:10:11.874353 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 01:10:11.874360 kernel: Hypervisor detected: KVM Dec 13 01:10:11.874367 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:10:11.874373 kernel: kvm-clock: using sched offset of 2143119845 cycles Dec 13 01:10:11.874380 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:10:11.874388 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:10:11.874395 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:10:11.874402 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:10:11.874409 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 01:10:11.874418 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:10:11.874425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:10:11.874432 kernel: Using GB pages for direct mapping Dec 13 01:10:11.874439 kernel: ACPI: Early table checksum verification disabled Dec 13 01:10:11.874445 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 01:10:11.874452 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874494 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874501 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874511 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 01:10:11.874518 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874525 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874532 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874539 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:10:11.874546 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 01:10:11.874553 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 01:10:11.874563 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 01:10:11.874572 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 01:10:11.874579 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 01:10:11.874587 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 01:10:11.874594 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 01:10:11.874601 kernel: No NUMA configuration found Dec 13 01:10:11.874608 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 01:10:11.874615 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 01:10:11.874625 kernel: Zone ranges: Dec 13 01:10:11.874632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:10:11.874639 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 01:10:11.874646 kernel: Normal empty Dec 13 01:10:11.874653 kernel: Movable zone start for each node Dec 13 01:10:11.874661 kernel: Early memory node ranges Dec 13 01:10:11.874668 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:10:11.874675 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 01:10:11.874682 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Dec 13 01:10:11.874692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:10:11.874699 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:10:11.874706 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:10:11.874713 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:10:11.874720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:10:11.874728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:10:11.874735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:10:11.874742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:10:11.874749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:10:11.874759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:10:11.874766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:10:11.874773 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:10:11.874781 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:10:11.874790 kernel: TSC deadline timer available Dec 13 01:10:11.874798 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:10:11.874806 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:10:11.874814 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:10:11.874822 kernel: kvm-guest: setup PV sched yield Dec 13 01:10:11.874829 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:10:11.874838 kernel: Booting paravirtualized kernel on KVM Dec 13 01:10:11.874846 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:10:11.874853 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:10:11.874860 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Dec 13 01:10:11.874868 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Dec 13 01:10:11.874875 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:10:11.874882 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:10:11.874889 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:10:11.874897 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:10:11.874907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:10:11.874914 kernel: random: crng init done Dec 13 01:10:11.874922 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:10:11.874929 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:10:11.874936 kernel: Fallback order for Node 0: 0 Dec 13 01:10:11.874944 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Dec 13 01:10:11.874951 kernel: Policy zone: DMA32 Dec 13 01:10:11.874958 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:10:11.874968 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Dec 13 01:10:11.874975 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:10:11.874983 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:10:11.874990 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:10:11.874997 kernel: Dynamic Preempt: voluntary Dec 13 01:10:11.875013 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:10:11.875021 kernel: rcu: RCU event tracing is enabled. Dec 13 01:10:11.875029 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:10:11.875037 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:10:11.875046 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:10:11.875054 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:10:11.875061 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:10:11.875068 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:10:11.875076 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:10:11.875083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:10:11.875090 kernel: Console: colour VGA+ 80x25 Dec 13 01:10:11.875097 kernel: printk: console [ttyS0] enabled Dec 13 01:10:11.875104 kernel: ACPI: Core revision 20230628 Dec 13 01:10:11.875114 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:10:11.875121 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:10:11.875128 kernel: x2apic enabled Dec 13 01:10:11.875136 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:10:11.875143 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 01:10:11.875150 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 01:10:11.875158 kernel: kvm-guest: setup PV IPIs Dec 13 01:10:11.875174 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:10:11.875181 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:10:11.875189 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:10:11.875197 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:10:11.875204 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:10:11.875214 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:10:11.875221 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:10:11.875229 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:10:11.875237 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:10:11.875244 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:10:11.875254 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:10:11.875261 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:10:11.875269 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:10:11.875277 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:10:11.875284 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:10:11.875293 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:10:11.875300 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:10:11.875308 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:10:11.875318 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:10:11.875325 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:10:11.875333 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:10:11.875341 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:10:11.875348 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:10:11.875356 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:10:11.875364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:10:11.875371 kernel: landlock: Up and running. Dec 13 01:10:11.875379 kernel: SELinux: Initializing. Dec 13 01:10:11.875400 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:10:11.875408 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:10:11.875416 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:10:11.875431 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:10:11.875441 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:10:11.875474 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:10:11.875490 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:10:11.875498 kernel: ... version: 0 Dec 13 01:10:11.875530 kernel: ... bit width: 48 Dec 13 01:10:11.875546 kernel: ... generic registers: 6 Dec 13 01:10:11.875554 kernel: ... value mask: 0000ffffffffffff Dec 13 01:10:11.875577 kernel: ... max period: 00007fffffffffff Dec 13 01:10:11.875584 kernel: ... fixed-purpose events: 0 Dec 13 01:10:11.875592 kernel: ... 
event mask: 000000000000003f Dec 13 01:10:11.875600 kernel: signal: max sigframe size: 1776 Dec 13 01:10:11.875607 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:10:11.875615 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:10:11.875622 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:10:11.875632 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:10:11.875640 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 01:10:11.875647 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:10:11.875655 kernel: smpboot: Max logical packages: 1 Dec 13 01:10:11.875663 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:10:11.875670 kernel: devtmpfs: initialized Dec 13 01:10:11.875678 kernel: x86/mm: Memory block size: 128MB Dec 13 01:10:11.875685 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:10:11.875693 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:10:11.875703 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:10:11.875710 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:10:11.875718 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:10:11.875726 kernel: audit: type=2000 audit(1734052212.218:1): state=initialized audit_enabled=0 res=1 Dec 13 01:10:11.875733 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:10:11.875741 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:10:11.875748 kernel: cpuidle: using governor menu Dec 13 01:10:11.875756 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:10:11.875763 kernel: dca service started, version 1.12.1 Dec 13 01:10:11.875773 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:10:11.875781 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 01:10:11.875788 kernel: PCI: Using configuration type 1 for base access Dec 13 01:10:11.875796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:10:11.875804 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:10:11.875811 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:10:11.875819 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:10:11.875826 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:10:11.875834 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:10:11.875843 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:10:11.875851 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:10:11.875858 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:10:11.875866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:10:11.875874 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:10:11.875881 kernel: ACPI: Interpreter enabled Dec 13 01:10:11.875888 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:10:11.875896 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:10:11.875904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:10:11.875913 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:10:11.875921 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:10:11.875929 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:10:11.876117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:10:11.876246 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:10:11.876368 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:10:11.876379 kernel: PCI host bridge to bus 0000:00 Dec 13 01:10:11.876525 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:10:11.876638 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:10:11.876789 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:10:11.876956 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:10:11.877126 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:10:11.877260 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:10:11.877373 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:10:11.877540 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:10:11.877672 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:10:11.877793 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 01:10:11.877912 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 01:10:11.878039 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 01:10:11.878159 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:10:11.878326 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:10:11.878486 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 01:10:11.878609 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 01:10:11.878730 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 01:10:11.878857 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:10:11.878977 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 01:10:11.879106 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 
01:10:11.879260 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 01:10:11.879396 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:10:11.879540 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 01:10:11.879660 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 01:10:11.879779 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 01:10:11.879897 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 01:10:11.880031 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:10:11.880155 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:10:11.880280 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:10:11.880397 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 01:10:11.880538 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 01:10:11.880666 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:10:11.880785 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:10:11.880795 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:10:11.880807 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:10:11.880815 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:10:11.880823 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:10:11.880830 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:10:11.880838 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:10:11.880846 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:10:11.880853 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:10:11.880862 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:10:11.880872 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:10:11.880885 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:10:11.880896 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:10:11.880906 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:10:11.880916 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:10:11.880926 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:10:11.880934 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:10:11.880941 kernel: iommu: Default domain type: Translated Dec 13 01:10:11.880949 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:10:11.880956 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:10:11.880967 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:10:11.880974 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:10:11.880982 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 01:10:11.881117 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:10:11.881237 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:10:11.881355 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:10:11.881365 kernel: vgaarb: loaded Dec 13 01:10:11.881373 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:10:11.881384 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:10:11.881392 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:10:11.881399 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 
01:10:11.881407 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:10:11.881415 kernel: pnp: PnP ACPI init Dec 13 01:10:11.881586 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:10:11.881598 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:10:11.881606 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:10:11.881618 kernel: NET: Registered PF_INET protocol family Dec 13 01:10:11.881626 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:10:11.881634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:10:11.881642 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:10:11.881650 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:10:11.881657 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:10:11.881665 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:10:11.881673 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:10:11.881681 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:10:11.881690 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:10:11.881698 kernel: NET: Registered PF_XDP protocol family Dec 13 01:10:11.881811 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:10:11.881921 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:10:11.882042 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:10:11.882152 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:10:11.882261 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:10:11.882370 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:10:11.882384 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:10:11.882392 kernel: Initialise system trusted keyrings Dec 13 01:10:11.882399 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:10:11.882407 kernel: Key type asymmetric registered Dec 13 01:10:11.882415 kernel: Asymmetric key parser 'x509' registered Dec 13 01:10:11.882422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:10:11.882430 kernel: io scheduler mq-deadline registered Dec 13 01:10:11.882437 kernel: io scheduler kyber registered Dec 13 01:10:11.882445 kernel: io scheduler bfq registered Dec 13 01:10:11.882473 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:10:11.882482 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:10:11.882490 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:10:11.882497 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:10:11.882505 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:10:11.882513 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:10:11.882520 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:10:11.882528 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:10:11.882536 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:10:11.882663 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:10:11.882678 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:10:11.882791 kernel: 
rtc_cmos 00:04: registered as rtc0 Dec 13 01:10:11.882905 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:10:11 UTC (1734052211) Dec 13 01:10:11.883026 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:10:11.883036 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:10:11.883044 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:10:11.883052 kernel: Segment Routing with IPv6 Dec 13 01:10:11.883062 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:10:11.883070 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:10:11.883078 kernel: Key type dns_resolver registered Dec 13 01:10:11.883085 kernel: IPI shorthand broadcast: enabled Dec 13 01:10:11.883093 kernel: sched_clock: Marking stable (544001924, 105786535)->(694277869, -44489410) Dec 13 01:10:11.883101 kernel: registered taskstats version 1 Dec 13 01:10:11.883108 kernel: Loading compiled-in X.509 certificates Dec 13 01:10:11.883116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:10:11.883124 kernel: Key type .fscrypt registered Dec 13 01:10:11.883133 kernel: Key type fscrypt-provisioning registered Dec 13 01:10:11.883141 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:10:11.883148 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:10:11.883156 kernel: ima: No architecture policies found Dec 13 01:10:11.883164 kernel: clk: Disabling unused clocks Dec 13 01:10:11.883171 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:10:11.883179 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:10:11.883186 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:10:11.883194 kernel: Run /init as init process Dec 13 01:10:11.883204 kernel: with arguments: Dec 13 01:10:11.883212 kernel: /init Dec 13 01:10:11.883219 kernel: with environment: Dec 13 01:10:11.883227 kernel: HOME=/ Dec 13 01:10:11.883234 kernel: TERM=linux Dec 13 01:10:11.883241 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:10:11.883251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:10:11.883260 systemd[1]: Detected virtualization kvm. Dec 13 01:10:11.883271 systemd[1]: Detected architecture x86-64. Dec 13 01:10:11.883279 systemd[1]: Running in initrd. Dec 13 01:10:11.883287 systemd[1]: No hostname configured, using default hostname. Dec 13 01:10:11.883295 systemd[1]: Hostname set to <localhost>. Dec 13 01:10:11.883303 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:10:11.883311 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:10:11.883319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:10:11.883327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:10:11.883339 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:10:11.883358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:10:11.883368 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:10:11.883377 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:10:11.883387 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:10:11.883398 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:10:11.883406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:10:11.883415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:10:11.883423 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:10:11.883432 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:10:11.883440 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:10:11.883448 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:10:11.883540 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:10:11.883552 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:10:11.883560 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:10:11.883569 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:10:11.883577 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:10:11.883585 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:10:11.883593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:10:11.883602 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:10:11.883610 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:10:11.883618 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:10:11.883629 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:10:11.883637 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:10:11.883645 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:10:11.883653 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:10:11.883661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:10:11.883669 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:10:11.883680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:10:11.883688 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:10:11.883716 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 01:10:11.883737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:10:11.883747 systemd-journald[193]: Journal started Dec 13 01:10:11.883767 systemd-journald[193]: Runtime Journal (/run/log/journal/9315295061db43fa85e3a27bce8e2393) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:10:11.877202 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 01:10:11.912651 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 01:10:11.912667 kernel: Bridge firewalling registered Dec 13 01:10:11.903680 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 01:10:11.915518 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:10:11.915923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:10:11.918306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:10:11.920784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:10:11.937662 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:10:11.940797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:10:11.943376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:10:11.947120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:10:11.957882 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:10:11.959774 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:10:11.962038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:10:11.975590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:10:11.976961 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:10:11.981853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:10:11.988349 dracut-cmdline[228]: dracut-dracut-053 Dec 13 01:10:11.991809 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:10:12.028988 systemd-resolved[235]: Positive Trust Anchors: Dec 13 01:10:12.029015 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:10:12.029046 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:10:12.031583 systemd-resolved[235]: Defaulting to hostname 'linux'. Dec 13 01:10:12.032702 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:10:12.038901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:10:12.098500 kernel: SCSI subsystem initialized Dec 13 01:10:12.107482 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:10:12.118500 kernel: iscsi: registered transport (tcp) Dec 13 01:10:12.140495 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:10:12.140544 kernel: QLogic iSCSI HBA Driver Dec 13 01:10:12.195647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:10:12.207993 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:10:12.247238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:10:12.247288 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:10:12.247301 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:10:12.290496 kernel: raid6: avx2x4 gen() 29293 MB/s Dec 13 01:10:12.307507 kernel: raid6: avx2x2 gen() 28580 MB/s Dec 13 01:10:12.324606 kernel: raid6: avx2x1 gen() 24446 MB/s Dec 13 01:10:12.324680 kernel: raid6: using algorithm avx2x4 gen() 29293 MB/s Dec 13 01:10:12.342595 kernel: raid6: .... xor() 6755 MB/s, rmw enabled Dec 13 01:10:12.342672 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:10:12.363495 kernel: xor: automatically using best checksumming function avx Dec 13 01:10:12.516490 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:10:12.529832 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:10:12.552642 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:10:12.568234 systemd-udevd[415]: Using default interface naming scheme 'v255'. Dec 13 01:10:12.573874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:10:12.589591 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:10:12.607851 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Dec 13 01:10:12.644721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:10:12.657780 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:10:12.720813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:10:12.734602 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:10:12.750062 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:10:12.753855 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:10:12.756696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:10:12.759375 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:10:12.761475 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:10:12.791926 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:10:12.791955 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:10:12.792192 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:10:12.792211 kernel: GPT:9289727 != 19775487 Dec 13 01:10:12.792233 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:10:12.792248 kernel: GPT:9289727 != 19775487 Dec 13 01:10:12.792261 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:10:12.792273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:10:12.781900 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:10:12.796560 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 01:10:12.796630 kernel: AES CTR mode by8 optimization enabled Dec 13 01:10:12.797860 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:10:12.798024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:10:12.802683 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:10:12.804142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:10:12.804449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:10:12.822864 kernel: libata version 3.00 loaded. Dec 13 01:10:12.806554 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:10:12.829379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:10:12.834481 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (463) Dec 13 01:10:12.829969 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:10:12.841511 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:10:12.859960 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:10:12.859977 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Dec 13 01:10:12.859988 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:10:12.860155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:10:12.861640 kernel: scsi host0: ahci Dec 13 01:10:12.861877 kernel: scsi host1: ahci Dec 13 01:10:12.862045 kernel: scsi host2: ahci Dec 13 01:10:12.862200 kernel: scsi host3: ahci Dec 13 01:10:12.862343 kernel: scsi host4: ahci Dec 13 01:10:12.862516 kernel: scsi host5: ahci Dec 13 01:10:12.862664 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:10:12.862676 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:10:12.862686 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:10:12.862696 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:10:12.862708 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:10:12.862718 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:10:12.871899 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:10:12.899245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:10:12.905903 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:10:12.910607 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:10:12.920109 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:10:12.928310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:10:12.948746 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:10:12.952588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:10:12.973383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:10:13.078636 disk-uuid[566]: Primary Header is updated. Dec 13 01:10:13.078636 disk-uuid[566]: Secondary Entries is updated. Dec 13 01:10:13.078636 disk-uuid[566]: Secondary Header is updated. Dec 13 01:10:13.083488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:10:13.088482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:10:13.169488 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:10:13.176091 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:10:13.176170 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:10:13.176479 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:10:13.177482 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:10:13.178501 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:10:13.178582 kernel: ata3.00: applying bridge limits Dec 13 01:10:13.179484 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:10:13.180517 kernel: ata3.00: configured for UDMA/100 Dec 13 01:10:13.182503 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:10:13.224507 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:10:13.242252 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:10:13.242273 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:10:14.090240 disk-uuid[577]: The operation has completed successfully. Dec 13 01:10:14.091813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:10:14.119519 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:10:14.119644 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:10:14.144745 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:10:14.148799 sh[592]: Success Dec 13 01:10:14.163491 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:10:14.199788 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:10:14.220451 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:10:14.224015 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:10:14.238809 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:10:14.238860 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:10:14.238871 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:10:14.240902 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:10:14.240927 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:10:14.245878 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:10:14.246722 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:10:14.255677 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:10:14.256718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 01:10:14.270892 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:10:14.270929 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:10:14.270940 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:10:14.273476 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:10:14.282932 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:10:14.284884 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:10:14.294304 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:10:14.301650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:10:14.354579 ignition[689]: Ignition 2.19.0 Dec 13 01:10:14.354590 ignition[689]: Stage: fetch-offline Dec 13 01:10:14.354626 ignition[689]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:10:14.354637 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:10:14.354731 ignition[689]: parsed url from cmdline: "" Dec 13 01:10:14.354735 ignition[689]: no config URL provided Dec 13 01:10:14.354740 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:10:14.354750 ignition[689]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:10:14.354776 ignition[689]: op(1): [started] loading QEMU firmware config module Dec 13 01:10:14.354782 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:10:14.363551 ignition[689]: op(1): [finished] loading QEMU firmware config module Dec 13 01:10:14.372620 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:10:14.386672 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:10:14.409873 systemd-networkd[781]: lo: Link UP Dec 13 01:10:14.409884 systemd-networkd[781]: lo: Gained carrier Dec 13 01:10:14.413138 ignition[689]: parsing config with SHA512: 51bd51a4faf49ac938f2365b0634718e4264761f17ff8f7c8538dea23bdde53b6719a330e2bf4e62c0ec794ef8a3e4c7aeb425705e30d9901e1b6222c5747ad0 Dec 13 01:10:14.413335 systemd-networkd[781]: Enumeration completed Dec 13 01:10:14.413558 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:10:14.413730 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:10:14.413734 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:10:14.418534 ignition[689]: fetch-offline: fetch-offline passed Dec 13 01:10:14.414527 systemd-networkd[781]: eth0: Link UP Dec 13 01:10:14.418596 ignition[689]: Ignition finished successfully Dec 13 01:10:14.414530 systemd-networkd[781]: eth0: Gained carrier Dec 13 01:10:14.414537 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:10:14.415965 systemd[1]: Reached target network.target - Network. Dec 13 01:10:14.418101 unknown[689]: fetched base config from "system" Dec 13 01:10:14.418118 unknown[689]: fetched user config from "qemu" Dec 13 01:10:14.421066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:10:14.423687 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Dec 13 01:10:14.426671 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:10:14.429607 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:10:14.443828 ignition[784]: Ignition 2.19.0 Dec 13 01:10:14.443838 ignition[784]: Stage: kargs Dec 13 01:10:14.444021 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:10:14.444032 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:10:14.444820 ignition[784]: kargs: kargs passed Dec 13 01:10:14.444862 ignition[784]: Ignition finished successfully Dec 13 01:10:14.448330 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:10:14.465591 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:10:14.478799 ignition[793]: Ignition 2.19.0 Dec 13 01:10:14.478812 ignition[793]: Stage: disks Dec 13 01:10:14.479022 ignition[793]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:10:14.479034 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:10:14.483204 ignition[793]: disks: disks passed Dec 13 01:10:14.483896 ignition[793]: Ignition finished successfully Dec 13 01:10:14.486852 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:10:14.488101 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:10:14.490101 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:10:14.490309 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:10:14.490647 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:10:14.491102 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:10:14.516715 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:10:14.529466 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:10:14.536297 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:10:14.554544 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:10:14.640487 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:10:14.640886 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:10:14.641656 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:10:14.655636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:10:14.657672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:10:14.659111 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:10:14.664183 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Dec 13 01:10:14.659156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 01:10:14.669913 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:10:14.669945 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:10:14.669959 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:10:14.669983 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:10:14.659181 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:10:14.671993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:10:14.690880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:10:14.691994 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:10:14.728964 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:10:14.734753 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:10:14.740533 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:10:14.746019 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:10:14.838197 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:10:14.850599 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:10:14.853959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:10:14.859511 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:10:14.880505 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:10:14.885112 ignition[924]: INFO : Ignition 2.19.0 Dec 13 01:10:14.885112 ignition[924]: INFO : Stage: mount Dec 13 01:10:14.887366 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:10:14.887366 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:10:14.887366 ignition[924]: INFO : mount: mount passed Dec 13 01:10:14.887366 ignition[924]: INFO : Ignition finished successfully Dec 13 01:10:14.888841 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:10:14.900569 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:10:15.238486 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:10:15.251745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:10:15.261053 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Dec 13 01:10:15.261088 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:10:15.261109 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:10:15.262524 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:10:15.265473 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:10:15.266600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:10:15.284941 ignition[955]: INFO : Ignition 2.19.0 Dec 13 01:10:15.284941 ignition[955]: INFO : Stage: files Dec 13 01:10:15.286760 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:10:15.286760 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:10:15.289427 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:10:15.290736 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:10:15.290736 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:10:15.295274 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:10:15.296765 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:10:15.298476 unknown[955]: wrote ssh authorized keys file for user: core Dec 13 01:10:15.299632 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:10:15.301346 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:10:15.303301 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:10:15.303301 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:10:15.303301 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:10:15.340758 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:10:15.443134 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:10:15.443134 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:10:15.446863 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:10:15.448522 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:10:15.450479 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:10:15.452130 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:10:15.453846 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:10:15.455528 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:10:15.457245 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:10:15.459132 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:10:15.460970 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:10:15.462708 ignition[955]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:10:15.465232 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:10:15.467634 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:10:15.469699 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:10:15.969382 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:10:16.153436 systemd-networkd[781]: eth0: Gained IPv6LL Dec 13 01:10:16.251668 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:10:16.251668 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 01:10:16.255350 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:10:16.277723 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:10:16.282011 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:10:16.283597 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:10:16.283597 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Dec 
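The files stage above does three separable things: it writes payload files (the helm tarball, the kubernetes sysext image), writes units and drop-ins (10-use-cgroupfs.conf, prepare-helm.service), and applies presets (prepare-helm enabled, coreos-metadata disabled). All three are verifiable after boot with stock systemctl; the expected outputs in the comments are inferred from the preset lines above, not shown in this log:

    systemctl cat containerd.service              # lists the 10-use-cgroupfs.conf drop-in
    systemctl is-enabled prepare-helm.service     # should print "enabled"
    systemctl is-enabled coreos-metadata.service  # should print "disabled"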
13 01:10:16.283597 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:10:16.283597 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:10:16.283597 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:10:16.283597 ignition[955]: INFO : files: files passed Dec 13 01:10:16.283597 ignition[955]: INFO : Ignition finished successfully Dec 13 01:10:16.285266 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:10:16.299584 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:10:16.302278 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:10:16.304184 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:10:16.304290 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:10:16.312198 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:10:16.314745 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:10:16.316391 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:10:16.317933 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:10:16.317625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:10:16.319729 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:10:16.335611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:10:16.358540 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:10:16.358671 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:10:16.360981 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:10:16.363059 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:10:16.365100 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:10:16.365806 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:10:16.383011 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:10:16.393656 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:10:16.402134 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:10:16.403425 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:10:16.405698 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:10:16.407720 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:10:16.407831 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:10:16.410203 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:10:16.411828 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:10:16.414161 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
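Ignition records its outcome in /etc/.ignition-result.json (written by op(15) above). Its exact schema is not shown in this log, so the sketch below just pretty-prints whatever is there; jq is available on this image (it appears later in this journal):

    jq . /etc/.ignition-result.json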
Dec 13 01:10:16.416525 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:10:16.418604 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:10:16.420806 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:10:16.422964 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:10:16.425276 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:10:16.427293 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:10:16.429629 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:10:16.431429 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:10:16.431545 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:10:16.433888 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:10:16.435348 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:10:16.437452 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:10:16.437593 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:10:16.439698 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:10:16.439806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:10:16.442217 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:10:16.442322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:10:16.444197 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:10:16.445964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:10:16.450536 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:10:16.452842 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:10:16.454577 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:10:16.456869 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:10:16.456978 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:10:16.459401 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:10:16.459503 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:10:16.461391 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:10:16.461523 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:10:16.463674 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:10:16.463780 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:10:16.473595 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:10:16.474819 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:10:16.474934 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:10:16.477931 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:10:16.479030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:10:16.479205 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 13 01:10:16.486189 ignition[1009]: INFO : Ignition 2.19.0 Dec 13 01:10:16.486189 ignition[1009]: INFO : Stage: umount Dec 13 01:10:16.486189 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:10:16.486189 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:10:16.481730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:10:16.495749 ignition[1009]: INFO : umount: umount passed Dec 13 01:10:16.495749 ignition[1009]: INFO : Ignition finished successfully Dec 13 01:10:16.481844 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:10:16.486983 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:10:16.487090 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:10:16.489042 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:10:16.489143 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:10:16.492534 systemd[1]: Stopped target network.target - Network. Dec 13 01:10:16.493790 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:10:16.493874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:10:16.495764 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:10:16.495823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:10:16.497595 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:10:16.497653 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:10:16.499520 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:10:16.499580 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:10:16.501743 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:10:16.503924 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:10:16.507073 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:10:16.507529 systemd-networkd[781]: eth0: DHCPv6 lease lost Dec 13 01:10:16.510148 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:10:16.510278 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:10:16.512742 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:10:16.512781 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:10:16.526600 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:10:16.528684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:10:16.528746 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:10:16.531175 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:10:16.533749 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:10:16.533868 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:10:16.539142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:10:16.539244 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:10:16.540559 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:10:16.540626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:10:16.542736 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 13 01:10:16.542790 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:10:16.554786 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:10:16.555857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:10:16.558829 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:10:16.559874 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:10:16.562906 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:10:16.563923 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:10:16.566040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:10:16.566083 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:10:16.569154 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:10:16.570103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:10:16.572297 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:10:16.572352 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:10:16.575373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:10:16.575426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:10:16.591599 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:10:16.593853 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:10:16.594934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:10:16.619923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:10:16.619984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:10:16.623588 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:10:16.624705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:10:16.672275 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:10:16.673292 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:10:16.675360 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:10:16.677385 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:10:16.678395 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:10:16.691617 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:10:16.700225 systemd[1]: Switching root. Dec 13 01:10:16.734855 systemd-journald[193]: Journal stopped Dec 13 01:10:17.853470 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
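journald is stopped with SIGTERM just before the pivot, but its runtime journal survives the switch root, so everything above remains queryable from the booted system. For example:

    journalctl --list-boots        # the initrd and the real root share one boot entry
    journalctl -b -t ignition      # replay only this boot's Ignition messages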
Dec 13 01:10:17.853545 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:10:17.853563 kernel: SELinux: policy capability open_perms=1 Dec 13 01:10:17.853574 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:10:17.853585 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:10:17.853601 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:10:17.853612 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:10:17.853623 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:10:17.853639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:10:17.853650 kernel: audit: type=1403 audit(1734052217.135:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:10:17.853668 systemd[1]: Successfully loaded SELinux policy in 39.886ms. Dec 13 01:10:17.853686 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.275ms. Dec 13 01:10:17.853699 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:10:17.853712 systemd[1]: Detected virtualization kvm. Dec 13 01:10:17.853725 systemd[1]: Detected architecture x86-64. Dec 13 01:10:17.853737 systemd[1]: Detected first boot. Dec 13 01:10:17.853748 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:10:17.853760 zram_generator::config[1074]: No configuration found. Dec 13 01:10:17.853775 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:10:17.853787 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:10:17.853799 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:10:17.853811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:10:17.853824 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:10:17.853836 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:10:17.853848 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:10:17.853860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:10:17.853874 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:10:17.853886 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:10:17.853898 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:10:17.853909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:10:17.853921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:10:17.853940 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:10:17.853953 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:10:17.853965 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:10:17.853977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:10:17.853992 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
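The SELinux policy loads in under 40 ms and the feature string confirms a +SELINUX build. Enforcement status can be read straight from selinuxfs, with no extra tooling; Flatcar typically runs permissive by default, though that is an assumption here rather than something this log states:

    cat /sys/fs/selinux/enforce    # 0 = permissive, 1 = enforcing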
Dec 13 01:10:17.854004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:10:17.854016 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:10:17.854028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:10:17.854040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:10:17.854052 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:10:17.854063 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:10:17.854075 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:10:17.854089 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:10:17.854101 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:10:17.854113 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:10:17.854124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:10:17.854136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:10:17.854148 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:10:17.854160 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:10:17.854171 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:10:17.854184 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:10:17.854196 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:10:17.854210 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:10:17.854222 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:10:17.854234 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:10:17.854246 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:10:17.854258 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:10:17.854270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:10:17.854282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:10:17.854294 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:10:17.854308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:10:17.854320 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:10:17.854332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:10:17.854343 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:10:17.854355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:10:17.854367 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:10:17.854379 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:10:17.854392 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
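The modprobe@dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of a single template unit that simply runs modprobe on the instance name. A quick way to see the template and dry-run one instance:

    systemctl cat modprobe@.service   # ExecStart runs modprobe with %i substituted
    modprobe -n -v loop               # -n: dry run, -v: show what would be loaded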
Dec 13 01:10:17.854406 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:10:17.854418 kernel: fuse: init (API version 7.39) Dec 13 01:10:17.854429 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:10:17.854441 kernel: loop: module loaded Dec 13 01:10:17.854452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:10:17.854476 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:10:17.854487 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:10:17.854499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:10:17.854531 systemd-journald[1155]: Collecting audit messages is disabled. Dec 13 01:10:17.854555 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:10:17.854567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:10:17.854579 systemd-journald[1155]: Journal started Dec 13 01:10:17.854601 systemd-journald[1155]: Runtime Journal (/run/log/journal/9315295061db43fa85e3a27bce8e2393) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:10:17.855479 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:10:17.862349 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:10:17.861677 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:10:17.863121 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:10:17.864470 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:10:17.866517 kernel: ACPI: bus type drm_connector registered Dec 13 01:10:17.866468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:10:17.868022 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:10:17.868231 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:10:17.870969 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:10:17.872986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:10:17.873255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:10:17.875100 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:10:17.875313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:10:17.876689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:10:17.876893 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:10:17.878404 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:10:17.878675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:10:17.880062 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:10:17.880289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:10:17.881768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:10:17.883252 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:10:17.884850 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
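The runtime journal lives on tmpfs (6.0M used, 48.4M max here). Usage is queryable at any time, and journal size limits can be capped with a standard journald.conf drop-in; SystemMaxUse is a documented journald option, and the 200M value below is only an example:

    journalctl --disk-usage
    mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nSystemMaxUse=200M\n' > /etc/systemd/journald.conf.d/10-size.conf
    systemctl restart systemd-journald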
Dec 13 01:10:17.899077 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:10:17.911556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:10:17.913743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:10:17.914927 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:10:17.919582 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:10:17.922602 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:10:17.925063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:10:17.927747 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:10:17.929072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:10:17.931051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:10:17.936055 systemd-journald[1155]: Time spent on flushing to /var/log/journal/9315295061db43fa85e3a27bce8e2393 is 13.702ms for 938 entries. Dec 13 01:10:17.936055 systemd-journald[1155]: System Journal (/var/log/journal/9315295061db43fa85e3a27bce8e2393) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:10:17.969768 systemd-journald[1155]: Received client request to flush runtime journal. Dec 13 01:10:17.935624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:10:17.941949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:10:17.942118 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:10:17.948437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:10:17.962652 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:10:17.964516 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:10:17.966085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:10:17.969830 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:10:17.973414 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:10:17.977725 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Dec 13 01:10:17.977745 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Dec 13 01:10:17.978251 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:10:17.984305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:10:17.990652 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:10:18.014540 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:10:18.028600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:10:18.044181 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Dec 13 01:10:18.044201 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. 
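The repeated "ACLs are not supported, ignoring" messages from systemd-tmpfiles are expected rather than errors: the feature string earlier in this log shows this systemd was built with -ACL. The build flags are confirmable at runtime:

    systemctl --version | head -n 2   # version line plus the +/- feature string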
Dec 13 01:10:18.050309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:10:18.464793 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:10:18.479588 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:10:18.502969 systemd-udevd[1235]: Using default interface naming scheme 'v255'. Dec 13 01:10:18.517561 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:10:18.532593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:10:18.546616 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:10:18.554504 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1244) Dec 13 01:10:18.557525 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1244) Dec 13 01:10:18.557310 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:10:18.568540 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1238) Dec 13 01:10:18.598512 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:10:18.605153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:10:18.622538 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:10:18.625473 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:10:18.637486 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:10:18.637506 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:10:18.638171 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:10:18.673621 systemd-networkd[1241]: lo: Link UP Dec 13 01:10:18.676619 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:10:18.673633 systemd-networkd[1241]: lo: Gained carrier Dec 13 01:10:18.675221 systemd-networkd[1241]: Enumeration completed Dec 13 01:10:18.675678 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:10:18.675683 systemd-networkd[1241]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:10:18.676359 systemd-networkd[1241]: eth0: Link UP Dec 13 01:10:18.676363 systemd-networkd[1241]: eth0: Gained carrier Dec 13 01:10:18.676373 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:10:18.678511 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:10:18.690306 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:10:18.690508 systemd-networkd[1241]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:10:18.694484 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:10:18.695204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
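eth0 is matched by the catch-all zz-default.network, hence the "potentially unpredictable interface name" warning. A sketch of a site-specific match file that silences it, using standard systemd.network syntax (the file name and the choice to keep DHCP are illustrative, not taken from this machine's config):

    cat > /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl reload && networkctl status eth0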
Dec 13 01:10:18.761929 kernel: kvm_amd: TSC scaling supported Dec 13 01:10:18.761994 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:10:18.762025 kernel: kvm_amd: Nested Paging enabled Dec 13 01:10:18.763095 kernel: kvm_amd: LBR virtualization supported Dec 13 01:10:18.763116 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:10:18.764488 kernel: kvm_amd: Virtual GIF supported Dec 13 01:10:18.783506 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:10:18.811714 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:10:18.826694 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:10:18.828302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:10:18.836133 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:10:18.869729 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:10:18.871220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:10:18.883671 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:10:18.888735 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:10:18.918377 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:10:18.920579 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:10:18.921897 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:10:18.921931 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:10:18.923017 systemd[1]: Reached target machines.target - Containers. Dec 13 01:10:18.925123 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:10:18.938623 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:10:18.941130 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:10:18.942550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:10:18.943640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:10:18.946701 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:10:18.949985 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:10:18.953668 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:10:18.959720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:10:18.967487 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:10:18.976829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:10:18.977637 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
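Because this is a first boot, the machine ID initialized from the VM UUID (noted earlier) gets committed to disk once the root filesystem is writable; the lvmetad warnings are harmless fallbacks to direct device scanning. The commit step is effectively the following, which is safe to re-run:

    cat /etc/machine-id                 # the ID derived from the VM UUID
    systemd-machine-id-setup --commit   # persist a transient machine ID (no-op if already on disk)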
Dec 13 01:10:18.988479 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:10:19.022477 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:10:19.058480 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 01:10:19.089923 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:10:19.099494 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:10:19.109501 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 01:10:19.115198 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:10:19.115762 (sd-merge)[1305]: Merged extensions into '/usr'. Dec 13 01:10:19.120106 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:10:19.120121 systemd[1]: Reloading... Dec 13 01:10:19.166565 zram_generator::config[1333]: No configuration found. Dec 13 01:10:19.186494 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:10:19.300252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:10:19.364513 systemd[1]: Reloading finished in 243 ms. Dec 13 01:10:19.381311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:10:19.382955 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:10:19.399739 systemd[1]: Starting ensure-sysext.service... Dec 13 01:10:19.402164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:10:19.405469 systemd[1]: Reloading requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:10:19.405482 systemd[1]: Reloading... Dec 13 01:10:19.427007 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:10:19.427378 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:10:19.428383 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:10:19.428708 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Dec 13 01:10:19.428794 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Dec 13 01:10:19.437156 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:10:19.437173 systemd-tmpfiles[1378]: Skipping /boot Dec 13 01:10:19.447987 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:10:19.448000 systemd-tmpfiles[1378]: Skipping /boot Dec 13 01:10:19.448498 zram_generator::config[1406]: No configuration found. Dec 13 01:10:19.567882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:10:19.632180 systemd[1]: Reloading finished in 226 ms. Dec 13 01:10:19.650825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:10:19.674761 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:10:19.677676 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
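sd-merge overlays the containerd-flatcar, docker-flatcar and kubernetes sysext images onto /usr (the kubernetes image is the one Ignition downloaded into /opt/extensions earlier), after which systemd reloads to pick up the new unit files. The merge state is inspectable with stock tooling:

    systemd-sysext status                          # merged extensions per hierarchy
    ls /etc/extensions /opt/extensions/kubernetes  # the symlink and image Ignition wrote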
Dec 13 01:10:19.680172 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:10:19.683688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:10:19.686078 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:10:19.692494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:10:19.692667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:10:19.694449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:10:19.698543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:10:19.705082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:10:19.707507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:10:19.707646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:10:19.708730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:10:19.708950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:10:19.711234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:10:19.711793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:10:19.714152 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:10:19.714412 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:10:19.720769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:10:19.721015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:10:19.726711 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:10:19.733759 augenrules[1483]: No rules Dec 13 01:10:19.732422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:10:19.734950 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:10:19.740711 systemd[1]: Finished ensure-sysext.service. Dec 13 01:10:19.744000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:10:19.744168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:10:19.752589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:10:19.755678 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:10:19.759097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:10:19.761773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:10:19.763087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:10:19.766475 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Dec 13 01:10:19.771735 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:10:19.774604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:10:19.775572 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:10:19.777336 systemd-resolved[1455]: Positive Trust Anchors: Dec 13 01:10:19.777353 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:10:19.777386 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:10:19.777676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:10:19.777899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:10:19.779625 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:10:19.779834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:10:19.781308 systemd-resolved[1455]: Defaulting to hostname 'linux'. Dec 13 01:10:19.781310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:10:19.781536 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:10:19.783227 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:10:19.783473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:10:19.784857 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:10:19.786344 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:10:19.793273 systemd[1]: Reached target network.target - Network. Dec 13 01:10:19.794262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:10:19.795669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:10:19.795724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:10:19.795756 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:10:19.856568 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:10:21.154148 systemd-resolved[1455]: Clock change detected. Flushing caches. Dec 13 01:10:21.154182 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:10:21.154221 systemd-timesyncd[1501]: Initial clock synchronization to Fri 2024-12-13 01:10:21.154099 UTC. Dec 13 01:10:21.155187 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:10:21.156352 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
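timesyncd steps the clock roughly 1.3 s forward against the KVM host at 10.0.0.1 (timestamps jump from 01:10:19.86 to 01:10:21.15), and resolved flushes its caches because cached DNS TTLs are tracked in wall-clock time. Sync state can be checked afterwards:

    timedatectl timesync-status   # server, poll interval, last offset
    timedatectl show-timesync     # same data as key=value pairs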
Dec 13 01:10:21.157625 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:10:21.158866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:10:21.160120 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:10:21.160151 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:10:21.161051 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:10:21.162206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:10:21.163427 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:10:21.164659 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:10:21.166377 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:10:21.169250 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:10:21.171485 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:10:21.181747 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:10:21.182871 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:10:21.183832 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:10:21.184903 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:10:21.184937 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:10:21.184956 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:10:21.186099 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:10:21.188221 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:10:21.192562 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:10:21.195469 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:10:21.196542 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:10:21.198673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:10:21.199119 jq[1519]: false Dec 13 01:10:21.200450 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:10:21.204518 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:10:21.212499 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:10:21.216598 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:10:21.218178 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
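The "System is tainted: cgroupsv1" line appears tied to the /etc/flatcar-cgroupv1 flag file Ignition wrote earlier (Flatcar's documented cgroup v1 opt-in), which is also why the containerd drop-in is named 10-use-cgroupfs.conf; that causal link is an inference, not stated in the log. The active hierarchy is checkable at runtime:

    stat -fc %T /sys/fs/cgroup    # "tmpfs" on cgroup v1, "cgroup2fs" on cgroup v2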
Dec 13 01:10:21.220659 extend-filesystems[1521]: Found loop3 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found loop4 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found loop5 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found sr0 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda1 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda2 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda3 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found usr Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda4 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda6 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda7 Dec 13 01:10:21.222450 extend-filesystems[1521]: Found vda9 Dec 13 01:10:21.222450 extend-filesystems[1521]: Checking size of /dev/vda9 Dec 13 01:10:21.252797 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:10:21.252856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1254) Dec 13 01:10:21.252872 extend-filesystems[1521]: Resized partition /dev/vda9 Dec 13 01:10:21.223069 dbus-daemon[1518]: [system] SELinux support is enabled Dec 13 01:10:21.235603 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:10:21.265755 extend-filesystems[1544]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:10:21.277427 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:10:21.244452 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:10:21.246791 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:10:21.303648 update_engine[1533]: I20241213 01:10:21.292805 1533 main.cc:92] Flatcar Update Engine starting Dec 13 01:10:21.303648 update_engine[1533]: I20241213 01:10:21.300976 1533 update_check_scheduler.cc:74] Next update check in 3m54s Dec 13 01:10:21.255740 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:10:21.304659 jq[1546]: true Dec 13 01:10:21.304888 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:10:21.304888 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:10:21.304888 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:10:21.256069 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:10:21.311718 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Dec 13 01:10:21.256434 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:10:21.256716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:10:21.260745 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:10:21.313847 jq[1553]: true Dec 13 01:10:21.261080 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:10:21.280911 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:10:21.306453 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:10:21.307832 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
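extend-filesystems grows the root filesystem online: resize2fs takes /dev/vda9 from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB) while it is mounted read-write at /. The manual equivalent is sketched below; growpart comes from cloud-utils and may not be present on every image, and it is only needed when the partition itself must grow first:

    growpart /dev/vda 9    # grow partition 9 to fill the disk (skip if already sized)
    resize2fs /dev/vda9    # online grow; ext4 supports this while mounted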
Dec 13 01:10:21.323326 systemd-logind[1532]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:10:21.324793 tar[1548]: linux-amd64/helm Dec 13 01:10:21.323355 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:10:21.325898 systemd-logind[1532]: New seat seat0. Dec 13 01:10:21.326064 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:10:21.328552 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:10:21.332190 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:10:21.334536 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:10:21.335918 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:10:21.336022 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:10:21.338065 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:10:21.346557 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:10:21.361334 bash[1581]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:10:21.363299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:10:21.366394 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:10:21.382003 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:10:21.481566 systemd-networkd[1241]: eth0: Gained IPv6LL Dec 13 01:10:21.488927 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:10:21.490883 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:10:21.497391 containerd[1551]: time="2024-12-13T01:10:21.496779781Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:10:21.502672 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:10:21.512721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:21.520687 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:10:21.527250 containerd[1551]: time="2024-12-13T01:10:21.527034941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.528930 containerd[1551]: time="2024-12-13T01:10:21.528893586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:10:21.528930 containerd[1551]: time="2024-12-13T01:10:21.528926017Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:10:21.528993 containerd[1551]: time="2024-12-13T01:10:21.528949070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 01:10:21.529303 containerd[1551]: time="2024-12-13T01:10:21.529158082Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:10:21.529303 containerd[1551]: time="2024-12-13T01:10:21.529181717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529303 containerd[1551]: time="2024-12-13T01:10:21.529251978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529303 containerd[1551]: time="2024-12-13T01:10:21.529264592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529586 containerd[1551]: time="2024-12-13T01:10:21.529559084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529586 containerd[1551]: time="2024-12-13T01:10:21.529583690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529638 containerd[1551]: time="2024-12-13T01:10:21.529597847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529638 containerd[1551]: time="2024-12-13T01:10:21.529607986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529923 containerd[1551]: time="2024-12-13T01:10:21.529707222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.529987 containerd[1551]: time="2024-12-13T01:10:21.529961288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:10:21.530157 containerd[1551]: time="2024-12-13T01:10:21.530134343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:10:21.530157 containerd[1551]: time="2024-12-13T01:10:21.530153759Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:10:21.530312 containerd[1551]: time="2024-12-13T01:10:21.530256662Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:10:21.530337 containerd[1551]: time="2024-12-13T01:10:21.530316585Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:10:21.536449 containerd[1551]: time="2024-12-13T01:10:21.536267425Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:10:21.537652 containerd[1551]: time="2024-12-13T01:10:21.537624119Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:10:21.537684 containerd[1551]: time="2024-12-13T01:10:21.537653354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Dec 13 01:10:21.537684 containerd[1551]: time="2024-12-13T01:10:21.537669775Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:10:21.537727 containerd[1551]: time="2024-12-13T01:10:21.537700242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:10:21.538110 containerd[1551]: time="2024-12-13T01:10:21.537870732Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:10:21.539081 containerd[1551]: time="2024-12-13T01:10:21.539057467Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:10:21.539232 containerd[1551]: time="2024-12-13T01:10:21.539210514Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:10:21.539257 containerd[1551]: time="2024-12-13T01:10:21.539240691Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:10:21.539277 containerd[1551]: time="2024-12-13T01:10:21.539255799Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:10:21.539297 containerd[1551]: time="2024-12-13T01:10:21.539289132Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.539316 containerd[1551]: time="2024-12-13T01:10:21.539304310Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.539335 containerd[1551]: time="2024-12-13T01:10:21.539318887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.539355 containerd[1551]: time="2024-12-13T01:10:21.539334567Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540406958Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540440340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540473202Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540488651Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540509691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540524088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540560877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540574111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540586064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540598637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540610179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540637160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540649232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.542934 containerd[1551]: time="2024-12-13T01:10:21.540663399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540675682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540687574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540715246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540737929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540757836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540784356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540795086Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540873924Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540890795Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540901465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540913167Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540939236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540953172Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 01:10:21.543200 containerd[1551]: time="2024-12-13T01:10:21.540969092Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:10:21.547848 containerd[1551]: time="2024-12-13T01:10:21.540979832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:10:21.547894 containerd[1551]: time="2024-12-13T01:10:21.545475864Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:10:21.547894 containerd[1551]: time="2024-12-13T01:10:21.545944152Z" level=info msg="Connect containerd service" Dec 13 01:10:21.547894 containerd[1551]: time="2024-12-13T01:10:21.546476290Z" level=info msg="using legacy CRI server" Dec 13 01:10:21.547894 containerd[1551]: time="2024-12-13T01:10:21.546513329Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:10:21.547894 containerd[1551]: time="2024-12-13T01:10:21.546647571Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:10:21.547894 
containerd[1551]: time="2024-12-13T01:10:21.547417134Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:10:21.548849 containerd[1551]: time="2024-12-13T01:10:21.548780301Z" level=info msg="Start subscribing containerd event" Dec 13 01:10:21.549632 containerd[1551]: time="2024-12-13T01:10:21.548972662Z" level=info msg="Start recovering state" Dec 13 01:10:21.549632 containerd[1551]: time="2024-12-13T01:10:21.549294185Z" level=info msg="Start event monitor" Dec 13 01:10:21.549632 containerd[1551]: time="2024-12-13T01:10:21.549314954Z" level=info msg="Start snapshots syncer" Dec 13 01:10:21.549632 containerd[1551]: time="2024-12-13T01:10:21.549324291Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:10:21.549632 containerd[1551]: time="2024-12-13T01:10:21.549357984Z" level=info msg="Start streaming server" Dec 13 01:10:21.550186 containerd[1551]: time="2024-12-13T01:10:21.550160629Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:10:21.550318 containerd[1551]: time="2024-12-13T01:10:21.550290733Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:10:21.550460 containerd[1551]: time="2024-12-13T01:10:21.550447347Z" level=info msg="containerd successfully booted in 0.055262s" Dec 13 01:10:21.550672 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:10:21.559214 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:10:21.559595 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:10:21.562006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:10:21.569456 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:10:21.599792 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:10:21.626129 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:10:21.636606 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:10:21.645137 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:10:21.645564 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:10:21.654977 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:10:21.664709 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:10:21.674689 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:10:21.676976 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:10:21.678551 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:10:21.726093 tar[1548]: linux-amd64/LICENSE Dec 13 01:10:21.726093 tar[1548]: linux-amd64/README.md Dec 13 01:10:21.741073 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:10:22.145961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:10:22.147706 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:10:22.149582 systemd[1]: Startup finished in 6.119s (kernel) + 3.755s (userspace) = 9.875s. 
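The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: no CNI plugin has installed a network config yet, and kubeadm-style bootstraps leave pod networking to an add-on installed later. For reference, a minimal hand-written bridge config would look like this (the name, filename, and subnet are illustrative, not from this system):

    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF

Note also that the CRI config dump just before this shows SystemdCgroup:false for the runc runtime, which is consistent with the CgroupDriver:cgroupfs the kubelet reports later in this log.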
Dec 13 01:10:22.170828 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:10:22.632896 kubelet[1655]: E1213 01:10:22.632738 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:10:22.637556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:10:22.637863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:10:30.974574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:10:30.986577 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:40350.service - OpenSSH per-connection server daemon (10.0.0.1:40350). Dec 13 01:10:31.023422 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 40350 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.025223 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.033775 systemd-logind[1532]: New session 1 of user core. Dec 13 01:10:31.034827 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:10:31.043553 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:10:31.054400 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:10:31.056787 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:10:31.065230 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:10:31.169601 systemd[1675]: Queued start job for default target default.target. Dec 13 01:10:31.170004 systemd[1675]: Created slice app.slice - User Application Slice. Dec 13 01:10:31.170021 systemd[1675]: Reached target paths.target - Paths. Dec 13 01:10:31.170033 systemd[1675]: Reached target timers.target - Timers. Dec 13 01:10:31.182436 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:10:31.189858 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:10:31.189958 systemd[1675]: Reached target sockets.target - Sockets. Dec 13 01:10:31.189978 systemd[1675]: Reached target basic.target - Basic System. Dec 13 01:10:31.190037 systemd[1675]: Reached target default.target - Main User Target. Dec 13 01:10:31.190083 systemd[1675]: Startup finished in 118ms. Dec 13 01:10:31.190516 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:10:31.191923 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:10:31.248575 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:40352.service - OpenSSH per-connection server daemon (10.0.0.1:40352). Dec 13 01:10:31.279136 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 40352 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.280713 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.284894 systemd-logind[1532]: New session 2 of user core. Dec 13 01:10:31.298637 systemd[1]: Started session-2.scope - Session 2 of User core. 
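The kubelet crash above is self-explanatory: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, and systemd will keep restarting the unit until it appears. A minimal stand-in, if one had to be created by hand (field values are illustrative, not recovered from this log):

    mkdir -p /var/lib/kubelet
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # assumption: cgroupfs, matching the CgroupDriver this log reports later
    cgroupDriver: cgroupfs
    EOF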
Dec 13 01:10:31.353800 sshd[1687]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:31.367793 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:40354.service - OpenSSH per-connection server daemon (10.0.0.1:40354). Dec 13 01:10:31.368301 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:40352.service: Deactivated successfully. Dec 13 01:10:31.371196 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:10:31.373229 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:10:31.373894 systemd-logind[1532]: Removed session 2. Dec 13 01:10:31.398076 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 40354 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.400009 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.404379 systemd-logind[1532]: New session 3 of user core. Dec 13 01:10:31.416654 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:10:31.466984 sshd[1692]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:31.479581 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:40368.service - OpenSSH per-connection server daemon (10.0.0.1:40368). Dec 13 01:10:31.480051 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:40354.service: Deactivated successfully. Dec 13 01:10:31.483096 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:10:31.483738 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:10:31.484543 systemd-logind[1532]: Removed session 3. Dec 13 01:10:31.510091 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 40368 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.511797 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.515681 systemd-logind[1532]: New session 4 of user core. Dec 13 01:10:31.529674 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:10:31.584492 sshd[1700]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:31.592573 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:40380.service - OpenSSH per-connection server daemon (10.0.0.1:40380). Dec 13 01:10:31.593009 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:40368.service: Deactivated successfully. Dec 13 01:10:31.596275 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:10:31.597127 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:10:31.598047 systemd-logind[1532]: Removed session 4. Dec 13 01:10:31.621169 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 40380 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.622676 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.626617 systemd-logind[1532]: New session 5 of user core. Dec 13 01:10:31.633593 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:10:31.690681 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:10:31.691038 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:10:31.711905 sudo[1715]: pam_unix(sudo:session): session closed for user root Dec 13 01:10:31.713984 sshd[1708]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:31.727712 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:40388.service - OpenSSH per-connection server daemon (10.0.0.1:40388). 
Dec 13 01:10:31.728525 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:40380.service: Deactivated successfully. Dec 13 01:10:31.730914 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:10:31.730951 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:10:31.732746 systemd-logind[1532]: Removed session 5. Dec 13 01:10:31.756741 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.758339 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.762470 systemd-logind[1532]: New session 6 of user core. Dec 13 01:10:31.772622 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:10:31.827231 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:10:31.827577 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:10:31.831686 sudo[1725]: pam_unix(sudo:session): session closed for user root Dec 13 01:10:31.838585 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:10:31.838929 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:10:31.857552 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:10:31.859648 auditctl[1728]: No rules Dec 13 01:10:31.860903 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:10:31.861248 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:10:31.863062 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:10:31.898910 augenrules[1747]: No rules Dec 13 01:10:31.900890 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:10:31.902200 sudo[1724]: pam_unix(sudo:session): session closed for user root Dec 13 01:10:31.904468 sshd[1717]: pam_unix(sshd:session): session closed for user core Dec 13 01:10:31.913624 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394). Dec 13 01:10:31.914085 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:40388.service: Deactivated successfully. Dec 13 01:10:31.916294 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:10:31.917429 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:10:31.918529 systemd-logind[1532]: Removed session 6. Dec 13 01:10:31.944396 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:10:31.946049 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:10:31.949798 systemd-logind[1532]: New session 7 of user core. Dec 13 01:10:31.965588 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:10:32.018804 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:10:32.019135 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:10:32.320555 systemd[1]: Starting docker.service - Docker Application Container Engine... 
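The sudo/auditctl sequence above removes the two shipped rule files and reloads audit-rules.service, which is why both auditctl and augenrules then report "No rules": augenrules compiles everything under /etc/audit/rules.d into the loaded ruleset. Restoring a rule is just a matter of dropping a file back in (the filename and rule below are illustrative):

    cat <<'EOF' > /etc/audit/rules.d/10-example.rules
    # Record writes and attribute changes to /etc/passwd
    -w /etc/passwd -p wa -k passwd_changes
    EOF
    augenrules --load   # or: systemctl restart audit-rules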
Dec 13 01:10:32.320788 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:10:32.616629 dockerd[1779]: time="2024-12-13T01:10:32.616469639Z" level=info msg="Starting up" Dec 13 01:10:32.687678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:10:32.697500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:33.293898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:10:33.299730 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:10:33.439439 kubelet[1815]: E1213 01:10:33.439335 1815 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:10:33.446911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:10:33.447220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:10:33.493443 dockerd[1779]: time="2024-12-13T01:10:33.493385309Z" level=info msg="Loading containers: start." Dec 13 01:10:33.601409 kernel: Initializing XFRM netlink socket Dec 13 01:10:33.679039 systemd-networkd[1241]: docker0: Link UP Dec 13 01:10:33.701048 dockerd[1779]: time="2024-12-13T01:10:33.700993679Z" level=info msg="Loading containers: done." Dec 13 01:10:33.716131 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2766225000-merged.mount: Deactivated successfully. Dec 13 01:10:33.720209 dockerd[1779]: time="2024-12-13T01:10:33.720156215Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:10:33.720295 dockerd[1779]: time="2024-12-13T01:10:33.720266392Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:10:33.720420 dockerd[1779]: time="2024-12-13T01:10:33.720398690Z" level=info msg="Daemon has completed initialization" Dec 13 01:10:33.759446 dockerd[1779]: time="2024-12-13T01:10:33.759379232Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:10:33.760008 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:10:34.496274 containerd[1551]: time="2024-12-13T01:10:34.496227135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:10:37.127641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462780285.mount: Deactivated successfully. 
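Earlier in this stretch dockerd warned that DOCKER_OPTS and friends are referenced but unset; those variables are expanded from the unit's environment, and a systemd drop-in is the usual way to define them (the path and value below are placeholders, not taken from this system):

    mkdir -p /etc/systemd/system/docker.service.d
    cat <<'EOF' > /etc/systemd/system/docker.service.d/10-opts.conf
    [Service]
    Environment="DOCKER_OPTS=--log-level=warn"
    EOF
    systemctl daemon-reload
    systemctl restart docker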
Dec 13 01:10:38.218535 containerd[1551]: time="2024-12-13T01:10:38.218477393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:38.219885 containerd[1551]: time="2024-12-13T01:10:38.219847803Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:10:38.221380 containerd[1551]: time="2024-12-13T01:10:38.221316948Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:38.224187 containerd[1551]: time="2024-12-13T01:10:38.224140914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:38.225090 containerd[1551]: time="2024-12-13T01:10:38.225048627Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.728779041s" Dec 13 01:10:38.225090 containerd[1551]: time="2024-12-13T01:10:38.225082190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:10:38.245481 containerd[1551]: time="2024-12-13T01:10:38.245437904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:10:39.936243 containerd[1551]: time="2024-12-13T01:10:39.936179388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:39.963632 containerd[1551]: time="2024-12-13T01:10:39.963573591Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:10:39.965954 containerd[1551]: time="2024-12-13T01:10:39.965921835Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:39.969433 containerd[1551]: time="2024-12-13T01:10:39.969406160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:39.970495 containerd[1551]: time="2024-12-13T01:10:39.970470175Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.724995913s" Dec 13 01:10:39.970495 containerd[1551]: time="2024-12-13T01:10:39.970494541Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 
01:10:39.994583 containerd[1551]: time="2024-12-13T01:10:39.994537029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:10:41.101300 containerd[1551]: time="2024-12-13T01:10:41.101240303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:41.102137 containerd[1551]: time="2024-12-13T01:10:41.102093883Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:10:41.103374 containerd[1551]: time="2024-12-13T01:10:41.103336935Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:41.106064 containerd[1551]: time="2024-12-13T01:10:41.106037599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:41.107077 containerd[1551]: time="2024-12-13T01:10:41.107052342Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.112472423s" Dec 13 01:10:41.107117 containerd[1551]: time="2024-12-13T01:10:41.107077049Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:10:41.128696 containerd[1551]: time="2024-12-13T01:10:41.128653322Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:10:42.172059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915905853.mount: Deactivated successfully. 
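The pulls logged around here (kube-apiserver, kube-controller-manager, kube-scheduler so far) land in containerd's k8s.io namespace, since the kubelet drives containerd over CRI. They can be listed or reproduced by hand with standard tooling:

    # List the Kubernetes images containerd is holding
    ctr -n k8s.io images ls | grep registry.k8s.io
    # Pull one through the CRI, the same path the kubelet uses
    crictl pull registry.k8s.io/kube-scheduler:v1.29.12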
Dec 13 01:10:42.726693 containerd[1551]: time="2024-12-13T01:10:42.726620411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:42.727517 containerd[1551]: time="2024-12-13T01:10:42.727439316Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:10:42.728700 containerd[1551]: time="2024-12-13T01:10:42.728666568Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:42.730765 containerd[1551]: time="2024-12-13T01:10:42.730734696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:42.731379 containerd[1551]: time="2024-12-13T01:10:42.731323400Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.602630975s" Dec 13 01:10:42.731414 containerd[1551]: time="2024-12-13T01:10:42.731379586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:10:42.755843 containerd[1551]: time="2024-12-13T01:10:42.755803199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:10:43.335879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651392216.mount: Deactivated successfully. Dec 13 01:10:43.697352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:10:43.707502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:43.847506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:10:43.853220 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:10:44.152632 kubelet[2081]: E1213 01:10:44.151939 2081 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:10:44.156958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:10:44.157230 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
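kubelet.service failing and then reappearing with "Scheduled restart job, restart counter is at 2" is systemd's Restart= logic at work; the unit will be retried on an interval until config.yaml exists. The effective restart policy can be read straight off the unit:

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts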
Dec 13 01:10:44.937545 containerd[1551]: time="2024-12-13T01:10:44.937468430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:44.938412 containerd[1551]: time="2024-12-13T01:10:44.938333592Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:10:44.939792 containerd[1551]: time="2024-12-13T01:10:44.939750730Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:44.944303 containerd[1551]: time="2024-12-13T01:10:44.944256309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:44.945513 containerd[1551]: time="2024-12-13T01:10:44.945468723Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.189482842s" Dec 13 01:10:44.945562 containerd[1551]: time="2024-12-13T01:10:44.945512034Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:10:44.967252 containerd[1551]: time="2024-12-13T01:10:44.967205237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:10:45.433950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686339067.mount: Deactivated successfully. 
Dec 13 01:10:45.440377 containerd[1551]: time="2024-12-13T01:10:45.440316920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:45.441028 containerd[1551]: time="2024-12-13T01:10:45.440965537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:10:45.442074 containerd[1551]: time="2024-12-13T01:10:45.442043608Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:45.444163 containerd[1551]: time="2024-12-13T01:10:45.444128017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:45.444824 containerd[1551]: time="2024-12-13T01:10:45.444777736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 477.53075ms" Dec 13 01:10:45.444824 containerd[1551]: time="2024-12-13T01:10:45.444815847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:10:45.465820 containerd[1551]: time="2024-12-13T01:10:45.465776346Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:10:45.976264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833957024.mount: Deactivated successfully. Dec 13 01:10:47.595994 containerd[1551]: time="2024-12-13T01:10:47.595931126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:47.596824 containerd[1551]: time="2024-12-13T01:10:47.596758087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:10:47.598062 containerd[1551]: time="2024-12-13T01:10:47.598033699Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:47.601198 containerd[1551]: time="2024-12-13T01:10:47.601158519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:10:47.602193 containerd[1551]: time="2024-12-13T01:10:47.602165348Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.136196641s" Dec 13 01:10:47.602234 containerd[1551]: time="2024-12-13T01:10:47.602196596Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:10:49.759788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
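The manual "Stopped kubelet.service" here, followed by "Reloading requested from client PID 2258 ('systemctl') (unit session-7.scope)", suggests the logged-in core user ran something like the following from session 7 (a plausible reconstruction; the actual commands are not captured in this log):

    sudo systemctl stop kubelet
    sudo systemctl daemon-reload
    sudo systemctl start kubelet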
Dec 13 01:10:49.770570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:49.790081 systemd[1]: Reloading requested from client PID 2258 ('systemctl') (unit session-7.scope)... Dec 13 01:10:49.790100 systemd[1]: Reloading... Dec 13 01:10:49.878392 zram_generator::config[2300]: No configuration found. Dec 13 01:10:50.111766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:10:50.185377 systemd[1]: Reloading finished in 394 ms. Dec 13 01:10:50.238543 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:10:50.238715 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:10:50.239393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:10:50.241437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:50.381838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:10:50.402846 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:10:50.446736 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:10:50.446736 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:10:50.446736 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:10:50.447657 kubelet[2359]: I1213 01:10:50.447603 2359 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:10:50.939319 kubelet[2359]: I1213 01:10:50.939280 2359 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:10:50.939319 kubelet[2359]: I1213 01:10:50.939313 2359 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:10:50.940981 kubelet[2359]: I1213 01:10:50.939853 2359 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:10:50.956120 kubelet[2359]: E1213 01:10:50.956088 2359 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.958891 kubelet[2359]: I1213 01:10:50.958860 2359 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:10:50.969063 kubelet[2359]: I1213 01:10:50.969026 2359 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:10:50.970231 kubelet[2359]: I1213 01:10:50.970208 2359 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:10:50.970391 kubelet[2359]: I1213 01:10:50.970361 2359 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:10:50.970487 kubelet[2359]: I1213 01:10:50.970405 2359 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:10:50.970487 kubelet[2359]: I1213 01:10:50.970418 2359 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:10:50.970553 kubelet[2359]: I1213 01:10:50.970538 2359 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:10:50.970650 kubelet[2359]: I1213 01:10:50.970630 2359 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:10:50.970686 kubelet[2359]: I1213 01:10:50.970652 2359 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:10:50.970686 kubelet[2359]: I1213 01:10:50.970680 2359 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:10:50.970738 kubelet[2359]: I1213 01:10:50.970695 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:10:50.971794 kubelet[2359]: I1213 01:10:50.971767 2359 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:10:50.972402 kubelet[2359]: W1213 01:10:50.972204 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.972402 kubelet[2359]: E1213 01:10:50.972251 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.972670 kubelet[2359]: W1213 01:10:50.972629 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.972670 kubelet[2359]: E1213 01:10:50.972665 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.974642 kubelet[2359]: I1213 01:10:50.974615 2359 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:10:50.974706 kubelet[2359]: W1213 01:10:50.974690 2359 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:10:50.975443 kubelet[2359]: I1213 01:10:50.975416 2359 server.go:1256] "Started kubelet" Dec 13 01:10:50.975787 kubelet[2359]: I1213 01:10:50.975546 2359 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:10:50.975787 kubelet[2359]: I1213 01:10:50.975736 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:10:50.979833 kubelet[2359]: I1213 01:10:50.979803 2359 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:10:50.981664 kubelet[2359]: I1213 01:10:50.980754 2359 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:10:50.982661 kubelet[2359]: I1213 01:10:50.981966 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:10:50.983557 kubelet[2359]: E1213 01:10:50.983171 2359 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109755b95ad491 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:10:50.975392913 +0000 UTC m=+0.568420883,LastTimestamp:2024-12-13 01:10:50.975392913 +0000 UTC m=+0.568420883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:10:50.984667 kubelet[2359]: I1213 01:10:50.984034 2359 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:10:50.984667 kubelet[2359]: I1213 01:10:50.984128 2359 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:10:50.984667 kubelet[2359]: I1213 01:10:50.984192 2359 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:10:50.984667 kubelet[2359]: W1213 01:10:50.984446 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.984667 kubelet[2359]: E1213 01:10:50.984488 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.74:6443: connect: connection refused Dec 13 01:10:50.985405 kubelet[2359]: E1213 01:10:50.985391 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Dec 13 01:10:50.987202 kubelet[2359]: I1213 01:10:50.987176 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:10:50.987436 kubelet[2359]: E1213 01:10:50.987417 2359 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:10:50.988165 kubelet[2359]: I1213 01:10:50.988137 2359 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:10:50.988165 kubelet[2359]: I1213 01:10:50.988155 2359 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:10:51.000974 kubelet[2359]: I1213 01:10:51.000944 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:10:51.002086 kubelet[2359]: I1213 01:10:51.002067 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:10:51.002086 kubelet[2359]: I1213 01:10:51.002089 2359 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:10:51.002196 kubelet[2359]: I1213 01:10:51.002110 2359 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:10:51.002196 kubelet[2359]: E1213 01:10:51.002157 2359 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:10:51.007127 kubelet[2359]: W1213 01:10:51.007081 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:51.007127 kubelet[2359]: E1213 01:10:51.007129 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:51.013559 kubelet[2359]: I1213 01:10:51.013503 2359 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:10:51.013559 kubelet[2359]: I1213 01:10:51.013529 2359 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:10:51.013559 kubelet[2359]: I1213 01:10:51.013545 2359 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:10:51.085449 kubelet[2359]: I1213 01:10:51.085422 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:10:51.085777 kubelet[2359]: E1213 01:10:51.085757 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:10:51.103029 kubelet[2359]: E1213 01:10:51.102995 2359 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:10:51.186858 kubelet[2359]: E1213 01:10:51.186822 2359 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Dec 13 01:10:51.286951 kubelet[2359]: I1213 01:10:51.286818 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:10:51.287108 kubelet[2359]: E1213 01:10:51.287088 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:10:51.303170 kubelet[2359]: E1213 01:10:51.303136 2359 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:10:51.558815 kubelet[2359]: I1213 01:10:51.558666 2359 policy_none.go:49] "None policy: Start" Dec 13 01:10:51.559842 kubelet[2359]: I1213 01:10:51.559801 2359 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:10:51.559842 kubelet[2359]: I1213 01:10:51.559841 2359 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:10:51.566744 kubelet[2359]: I1213 01:10:51.566716 2359 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:10:51.567021 kubelet[2359]: I1213 01:10:51.567000 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:10:51.568385 kubelet[2359]: E1213 01:10:51.568352 2359 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:10:51.588003 kubelet[2359]: E1213 01:10:51.587988 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Dec 13 01:10:51.688525 kubelet[2359]: I1213 01:10:51.688493 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:10:51.688918 kubelet[2359]: E1213 01:10:51.688893 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:10:51.704070 kubelet[2359]: I1213 01:10:51.704042 2359 topology_manager.go:215] "Topology Admit Handler" podUID="66f586fc4d0d1cb9b2b130287525e8c5" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:10:51.705406 kubelet[2359]: I1213 01:10:51.705363 2359 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:10:51.706228 kubelet[2359]: I1213 01:10:51.706206 2359 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:10:51.788691 kubelet[2359]: I1213 01:10:51.788661 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:51.788691 kubelet[2359]: I1213 01:10:51.788696 2359 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:51.788794 kubelet[2359]: I1213 01:10:51.788763 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:51.788824 kubelet[2359]: I1213 01:10:51.788810 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:51.788848 kubelet[2359]: I1213 01:10:51.788835 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:51.788872 kubelet[2359]: I1213 01:10:51.788865 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:10:51.788905 kubelet[2359]: I1213 01:10:51.788894 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66f586fc4d0d1cb9b2b130287525e8c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f586fc4d0d1cb9b2b130287525e8c5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:51.788929 kubelet[2359]: I1213 01:10:51.788923 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66f586fc4d0d1cb9b2b130287525e8c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f586fc4d0d1cb9b2b130287525e8c5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:51.788964 kubelet[2359]: I1213 01:10:51.788953 2359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66f586fc4d0d1cb9b2b130287525e8c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66f586fc4d0d1cb9b2b130287525e8c5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:52.010711 kubelet[2359]: E1213 01:10:52.010578 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:52.010711 kubelet[2359]: E1213 01:10:52.010639 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:52.011248 containerd[1551]: time="2024-12-13T01:10:52.011200733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:10:52.011680 containerd[1551]: time="2024-12-13T01:10:52.011284500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66f586fc4d0d1cb9b2b130287525e8c5,Namespace:kube-system,Attempt:0,}" Dec 13 01:10:52.012736 kubelet[2359]: E1213 01:10:52.012717 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:52.013069 containerd[1551]: time="2024-12-13T01:10:52.013037337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:10:52.062589 kubelet[2359]: W1213 01:10:52.062562 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.062653 kubelet[2359]: E1213 01:10:52.062597 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.077896 kubelet[2359]: W1213 01:10:52.077848 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.077896 kubelet[2359]: E1213 01:10:52.077893 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.109909 kubelet[2359]: W1213 01:10:52.109868 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.109909 kubelet[2359]: E1213 01:10:52.109910 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.335276 kubelet[2359]: W1213 01:10:52.335119 2359 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:52.335276 kubelet[2359]: E1213 01:10:52.335196 2359 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection 
refused Dec 13 01:10:52.388665 kubelet[2359]: E1213 01:10:52.388624 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Dec 13 01:10:52.490334 kubelet[2359]: I1213 01:10:52.490288 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:10:52.490786 kubelet[2359]: E1213 01:10:52.490745 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Dec 13 01:10:52.530931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104855019.mount: Deactivated successfully. Dec 13 01:10:52.540390 containerd[1551]: time="2024-12-13T01:10:52.540322295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:10:52.542390 containerd[1551]: time="2024-12-13T01:10:52.542341772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:10:52.543482 containerd[1551]: time="2024-12-13T01:10:52.543425304Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:10:52.544427 containerd[1551]: time="2024-12-13T01:10:52.544389833Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:10:52.545387 containerd[1551]: time="2024-12-13T01:10:52.545316851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:10:52.546346 containerd[1551]: time="2024-12-13T01:10:52.546304383Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:10:52.547229 containerd[1551]: time="2024-12-13T01:10:52.547191026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:10:52.549167 containerd[1551]: time="2024-12-13T01:10:52.549124792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:10:52.550824 containerd[1551]: time="2024-12-13T01:10:52.550771130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.673359ms" Dec 13 01:10:52.551562 containerd[1551]: time="2024-12-13T01:10:52.551529202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.165954ms" Dec 13 01:10:52.553817 containerd[1551]: time="2024-12-13T01:10:52.553774693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.501675ms" Dec 13 01:10:52.703911 containerd[1551]: time="2024-12-13T01:10:52.703295107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:10:52.703911 containerd[1551]: time="2024-12-13T01:10:52.703354769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:10:52.703911 containerd[1551]: time="2024-12-13T01:10:52.703511783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:10:52.703911 containerd[1551]: time="2024-12-13T01:10:52.703715325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:10:52.704178 containerd[1551]: time="2024-12-13T01:10:52.704086721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:10:52.704178 containerd[1551]: time="2024-12-13T01:10:52.704143889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:10:52.704178 containerd[1551]: time="2024-12-13T01:10:52.704164257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:10:52.704348 containerd[1551]: time="2024-12-13T01:10:52.704279012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:10:52.711732 containerd[1551]: time="2024-12-13T01:10:52.711643042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:10:52.711778 containerd[1551]: time="2024-12-13T01:10:52.711748480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:10:52.711821 containerd[1551]: time="2024-12-13T01:10:52.711787713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:10:52.711975 containerd[1551]: time="2024-12-13T01:10:52.711931724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:10:52.770707 containerd[1551]: time="2024-12-13T01:10:52.770660311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66f586fc4d0d1cb9b2b130287525e8c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b1861fdc6e1d3d04d26f6ec472290cc9d1b7d0864927f53a34664ac91205d0f\"" Dec 13 01:10:52.772660 kubelet[2359]: E1213 01:10:52.772570 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:52.775110 containerd[1551]: time="2024-12-13T01:10:52.774986103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd50e705d317b4f41084df543d8ff7d672748cebc9db935a529f50e151d69eb7\"" Dec 13 01:10:52.775516 kubelet[2359]: E1213 01:10:52.775490 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:52.777837 containerd[1551]: time="2024-12-13T01:10:52.777806612Z" level=info msg="CreateContainer within sandbox \"bd50e705d317b4f41084df543d8ff7d672748cebc9db935a529f50e151d69eb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:10:52.777876 containerd[1551]: time="2024-12-13T01:10:52.777842430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec13a4b3ae63317d44c3dce5d5c0944db771082da78121e8bb72bf758e697d1b\"" Dec 13 01:10:52.778484 containerd[1551]: time="2024-12-13T01:10:52.778446953Z" level=info msg="CreateContainer within sandbox \"8b1861fdc6e1d3d04d26f6ec472290cc9d1b7d0864927f53a34664ac91205d0f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:10:52.778660 kubelet[2359]: E1213 01:10:52.778625 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:52.781224 containerd[1551]: time="2024-12-13T01:10:52.781180900Z" level=info msg="CreateContainer within sandbox \"ec13a4b3ae63317d44c3dce5d5c0944db771082da78121e8bb72bf758e697d1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:10:52.992238 kubelet[2359]: E1213 01:10:52.992111 2359 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused Dec 13 01:10:53.288071 containerd[1551]: time="2024-12-13T01:10:53.287951146Z" level=info msg="CreateContainer within sandbox \"ec13a4b3ae63317d44c3dce5d5c0944db771082da78121e8bb72bf758e697d1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"701ce4f3e9e7064a619c62406c835e9ef57552d5792e91b4438a7c8399ca6c81\"" Dec 13 01:10:53.288587 containerd[1551]: time="2024-12-13T01:10:53.288555208Z" level=info msg="StartContainer for \"701ce4f3e9e7064a619c62406c835e9ef57552d5792e91b4438a7c8399ca6c81\"" Dec 13 01:10:53.294176 containerd[1551]: time="2024-12-13T01:10:53.294112831Z" level=info 
msg="CreateContainer within sandbox \"8b1861fdc6e1d3d04d26f6ec472290cc9d1b7d0864927f53a34664ac91205d0f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfe7ba9db821ba479cc0f5a05b1deb5fb681fc9cc76606ec0be4822f4f768a45\"" Dec 13 01:10:53.294776 containerd[1551]: time="2024-12-13T01:10:53.294732222Z" level=info msg="StartContainer for \"bfe7ba9db821ba479cc0f5a05b1deb5fb681fc9cc76606ec0be4822f4f768a45\"" Dec 13 01:10:53.295447 containerd[1551]: time="2024-12-13T01:10:53.295285630Z" level=info msg="CreateContainer within sandbox \"bd50e705d317b4f41084df543d8ff7d672748cebc9db935a529f50e151d69eb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8bd7ca14ff01d0088412ed5399043ff2bf906342b4e3a41c2fafde7de3b42324\"" Dec 13 01:10:53.295730 containerd[1551]: time="2024-12-13T01:10:53.295695910Z" level=info msg="StartContainer for \"8bd7ca14ff01d0088412ed5399043ff2bf906342b4e3a41c2fafde7de3b42324\"" Dec 13 01:10:53.371327 containerd[1551]: time="2024-12-13T01:10:53.371219033Z" level=info msg="StartContainer for \"bfe7ba9db821ba479cc0f5a05b1deb5fb681fc9cc76606ec0be4822f4f768a45\" returns successfully" Dec 13 01:10:53.371327 containerd[1551]: time="2024-12-13T01:10:53.371254469Z" level=info msg="StartContainer for \"8bd7ca14ff01d0088412ed5399043ff2bf906342b4e3a41c2fafde7de3b42324\" returns successfully" Dec 13 01:10:53.371327 containerd[1551]: time="2024-12-13T01:10:53.371299764Z" level=info msg="StartContainer for \"701ce4f3e9e7064a619c62406c835e9ef57552d5792e91b4438a7c8399ca6c81\" returns successfully" Dec 13 01:10:54.027596 kubelet[2359]: E1213 01:10:54.027574 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:54.029269 kubelet[2359]: E1213 01:10:54.028343 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:54.029269 kubelet[2359]: E1213 01:10:54.029220 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:54.096444 kubelet[2359]: I1213 01:10:54.095228 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:10:54.107419 kubelet[2359]: I1213 01:10:54.107378 2359 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:10:54.121540 kubelet[2359]: E1213 01:10:54.121501 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.162162 kubelet[2359]: E1213 01:10:54.162116 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 01:10:54.221690 kubelet[2359]: E1213 01:10:54.221629 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.322305 kubelet[2359]: E1213 01:10:54.322173 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.422713 kubelet[2359]: E1213 01:10:54.422672 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.523385 kubelet[2359]: E1213 01:10:54.523355 2359 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.624156 kubelet[2359]: E1213 01:10:54.624048 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.724564 kubelet[2359]: E1213 01:10:54.724529 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.825133 kubelet[2359]: E1213 01:10:54.825097 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:54.925738 kubelet[2359]: E1213 01:10:54.925609 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:55.025873 kubelet[2359]: E1213 01:10:55.025839 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:10:55.031347 kubelet[2359]: E1213 01:10:55.031313 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:55.031791 kubelet[2359]: E1213 01:10:55.031515 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:55.031791 kubelet[2359]: E1213 01:10:55.031648 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:55.974387 kubelet[2359]: I1213 01:10:55.974326 2359 apiserver.go:52] "Watching apiserver" Dec 13 01:10:55.984409 kubelet[2359]: I1213 01:10:55.984361 2359 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:10:56.038969 kubelet[2359]: E1213 01:10:56.038951 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:57.033244 kubelet[2359]: E1213 01:10:57.033211 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:57.450048 systemd[1]: Reloading requested from client PID 2637 ('systemctl') (unit session-7.scope)... Dec 13 01:10:57.450636 systemd[1]: Reloading... Dec 13 01:10:57.547396 zram_generator::config[2685]: No configuration found. Dec 13 01:10:57.662227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:10:57.739478 systemd[1]: Reloading finished in 288 ms. Dec 13 01:10:57.773747 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:57.788411 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:10:57.788835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:10:57.800548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:10:57.937520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
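
The restart sequence above recaps the earlier bootstrap failure mode: while the API server at 10.0.0.74:6443 refuses connections, the lease controller doubles its retry interval (200ms, 400ms, 800ms, then 1.6s) and node registration keeps failing until the static control-plane pods come up. A minimal Go sketch of the same doubling probe follows; the unauthenticated /healthz path and the skipped TLS verification are assumptions made for the sketch, not details taken from the log.

    // probe.go — a doubling-backoff reachability check against the API
    // server endpoint seen in the log. Not kubelet code; it only mirrors
    // the retry shape of the "Failed to ensure lease exists" entries.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is an assumption for this sketch: we have no
        // CA bundle here, and /healthz is assumed to allow anonymous GETs.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get("https://10.0.0.74:6443/healthz")
            if err == nil {
                resp.Body.Close()
                fmt.Printf("attempt %d: apiserver up (HTTP %d)\n", attempt, resp.StatusCode)
                return
            }
            fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, interval)
            time.Sleep(interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the log
        }
        fmt.Println("giving up; apiserver still unreachable")
    }
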
Dec 13 01:10:57.942064 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:10:57.987945 kubelet[2731]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:10:57.987945 kubelet[2731]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:10:57.987945 kubelet[2731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:10:57.988342 kubelet[2731]: I1213 01:10:57.987992 2731 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:10:57.992814 kubelet[2731]: I1213 01:10:57.992690 2731 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:10:57.992814 kubelet[2731]: I1213 01:10:57.992715 2731 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:10:57.992956 kubelet[2731]: I1213 01:10:57.992944 2731 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:10:57.994350 kubelet[2731]: I1213 01:10:57.994333 2731 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:10:57.998526 kubelet[2731]: I1213 01:10:57.998490 2731 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:10:58.006191 kubelet[2731]: I1213 01:10:58.006176 2731 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:10:58.006757 kubelet[2731]: I1213 01:10:58.006739 2731 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:10:58.006925 kubelet[2731]: I1213 01:10:58.006901 2731 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:10:58.007018 kubelet[2731]: I1213 01:10:58.006935 2731 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:10:58.007018 kubelet[2731]: I1213 01:10:58.006943 2731 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:10:58.007018 kubelet[2731]: I1213 01:10:58.006974 2731 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:10:58.007078 kubelet[2731]: I1213 01:10:58.007067 2731 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:10:58.007101 kubelet[2731]: I1213 01:10:58.007080 2731 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:10:58.007124 kubelet[2731]: I1213 01:10:58.007106 2731 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:10:58.007124 kubelet[2731]: I1213 01:10:58.007122 2731 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:10:58.007700 kubelet[2731]: I1213 01:10:58.007680 2731 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:10:58.008085 kubelet[2731]: I1213 01:10:58.007923 2731 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:10:58.008437 kubelet[2731]: I1213 01:10:58.008415 2731 server.go:1256] "Started kubelet" Dec 13 01:10:58.008665 kubelet[2731]: I1213 01:10:58.008650 2731 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:10:58.008700 kubelet[2731]: I1213 01:10:58.008682 2731 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:10:58.008919 kubelet[2731]: I1213 01:10:58.008896 2731 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:10:58.009433 kubelet[2731]: 
I1213 01:10:58.009411 2731 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:10:58.012810 kubelet[2731]: I1213 01:10:58.012687 2731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:10:58.016856 kubelet[2731]: I1213 01:10:58.016773 2731 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:10:58.017145 kubelet[2731]: I1213 01:10:58.017122 2731 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:10:58.017291 kubelet[2731]: I1213 01:10:58.017274 2731 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:10:58.025490 kubelet[2731]: E1213 01:10:58.025433 2731 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:10:58.027987 kubelet[2731]: I1213 01:10:58.027949 2731 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:10:58.027987 kubelet[2731]: I1213 01:10:58.027965 2731 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:10:58.028127 kubelet[2731]: I1213 01:10:58.028023 2731 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:10:58.030902 kubelet[2731]: I1213 01:10:58.030388 2731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:10:58.031590 kubelet[2731]: I1213 01:10:58.031573 2731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:10:58.031641 kubelet[2731]: I1213 01:10:58.031597 2731 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:10:58.031641 kubelet[2731]: I1213 01:10:58.031614 2731 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:10:58.031689 kubelet[2731]: E1213 01:10:58.031660 2731 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:10:58.074316 kubelet[2731]: I1213 01:10:58.074277 2731 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:10:58.074316 kubelet[2731]: I1213 01:10:58.074306 2731 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:10:58.074316 kubelet[2731]: I1213 01:10:58.074325 2731 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:10:58.075386 kubelet[2731]: I1213 01:10:58.074586 2731 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:10:58.075386 kubelet[2731]: I1213 01:10:58.074642 2731 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:10:58.075386 kubelet[2731]: I1213 01:10:58.074651 2731 policy_none.go:49] "None policy: Start" Dec 13 01:10:58.075386 kubelet[2731]: I1213 01:10:58.075244 2731 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:10:58.075386 kubelet[2731]: I1213 01:10:58.075266 2731 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:10:58.075517 kubelet[2731]: I1213 01:10:58.075435 2731 state_mem.go:75] "Updated machine memory state" Dec 13 01:10:58.078763 kubelet[2731]: I1213 01:10:58.078646 2731 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:10:58.079625 kubelet[2731]: I1213 01:10:58.079597 2731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:10:58.121439 kubelet[2731]: I1213 
01:10:58.121403 2731 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:10:58.127117 kubelet[2731]: I1213 01:10:58.127079 2731 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:10:58.127225 kubelet[2731]: I1213 01:10:58.127157 2731 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:10:58.132106 kubelet[2731]: I1213 01:10:58.132073 2731 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:10:58.132231 kubelet[2731]: I1213 01:10:58.132139 2731 topology_manager.go:215] "Topology Admit Handler" podUID="66f586fc4d0d1cb9b2b130287525e8c5" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:10:58.132231 kubelet[2731]: I1213 01:10:58.132167 2731 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:10:58.140227 kubelet[2731]: E1213 01:10:58.140196 2731 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:58.217613 kubelet[2731]: I1213 01:10:58.217569 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66f586fc4d0d1cb9b2b130287525e8c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66f586fc4d0d1cb9b2b130287525e8c5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:58.217613 kubelet[2731]: I1213 01:10:58.217609 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:58.217613 kubelet[2731]: I1213 01:10:58.217628 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:58.217849 kubelet[2731]: I1213 01:10:58.217661 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:58.217849 kubelet[2731]: I1213 01:10:58.217678 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:58.217849 kubelet[2731]: I1213 01:10:58.217695 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:10:58.217849 kubelet[2731]: I1213 01:10:58.217713 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66f586fc4d0d1cb9b2b130287525e8c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f586fc4d0d1cb9b2b130287525e8c5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:58.217849 kubelet[2731]: I1213 01:10:58.217733 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66f586fc4d0d1cb9b2b130287525e8c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f586fc4d0d1cb9b2b130287525e8c5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:10:58.218012 kubelet[2731]: I1213 01:10:58.217775 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:58.592406 kubelet[2731]: E1213 01:10:58.592240 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:58.592406 kubelet[2731]: E1213 01:10:58.592274 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:58.592753 kubelet[2731]: E1213 01:10:58.592730 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:59.007878 kubelet[2731]: I1213 01:10:59.007728 2731 apiserver.go:52] "Watching apiserver" Dec 13 01:10:59.018858 kubelet[2731]: I1213 01:10:59.017923 2731 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:10:59.048974 kubelet[2731]: E1213 01:10:59.048835 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:59.048974 kubelet[2731]: E1213 01:10:59.048890 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:59.056244 kubelet[2731]: E1213 01:10:59.056219 2731 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:10:59.056843 kubelet[2731]: E1213 01:10:59.056786 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:10:59.071766 kubelet[2731]: I1213 01:10:59.071730 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.07164574 podStartE2EDuration="1.07164574s" podCreationTimestamp="2024-12-13 01:10:58 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:10:59.064445479 +0000 UTC m=+1.118376670" watchObservedRunningTime="2024-12-13 01:10:59.07164574 +0000 UTC m=+1.125576921" Dec 13 01:10:59.071884 kubelet[2731]: I1213 01:10:59.071819 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.071803649 podStartE2EDuration="3.071803649s" podCreationTimestamp="2024-12-13 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:10:59.071801885 +0000 UTC m=+1.125733066" watchObservedRunningTime="2024-12-13 01:10:59.071803649 +0000 UTC m=+1.125734830" Dec 13 01:10:59.099396 kubelet[2731]: I1213 01:10:59.096011 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.095961292 podStartE2EDuration="1.095961292s" podCreationTimestamp="2024-12-13 01:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:10:59.081589434 +0000 UTC m=+1.135520615" watchObservedRunningTime="2024-12-13 01:10:59.095961292 +0000 UTC m=+1.149892473" Dec 13 01:11:00.049784 kubelet[2731]: E1213 01:11:00.049746 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:00.050228 kubelet[2731]: E1213 01:11:00.049913 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:01.237884 kubelet[2731]: E1213 01:11:01.237834 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:02.223988 sudo[1760]: pam_unix(sudo:session): session closed for user root Dec 13 01:11:02.226069 sshd[1753]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:02.230525 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:40394.service: Deactivated successfully. Dec 13 01:11:02.232659 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:11:02.233417 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:11:02.234219 systemd-logind[1532]: Removed session 7. Dec 13 01:11:05.171896 kubelet[2731]: E1213 01:11:05.171853 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:06.056317 kubelet[2731]: E1213 01:11:06.056281 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:06.279083 update_engine[1533]: I20241213 01:11:06.278998 1533 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:11:06.311409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2826)
Dec 13 01:11:06.338450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2824)
Dec 13 01:11:06.374419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2824)
Dec 13 01:11:07.057293 kubelet[2731]: E1213 01:11:07.057257 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:08.533622 kubelet[2731]: E1213 01:11:08.533591 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:11.241716 kubelet[2731]: E1213 01:11:11.241676 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:11.648853 kubelet[2731]: I1213 01:11:11.648748 2731 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:11:11.649150 containerd[1551]: time="2024-12-13T01:11:11.649110944Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:11:11.649537 kubelet[2731]: I1213 01:11:11.649336 2731 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:11:11.755441 kubelet[2731]: I1213 01:11:11.755405 2731 topology_manager.go:215] "Topology Admit Handler" podUID="6b5b04de-1eb4-469e-8213-11d1b2c3a86a" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-gjppp"
Dec 13 01:11:11.762140 kubelet[2731]: I1213 01:11:11.762115 2731 topology_manager.go:215] "Topology Admit Handler" podUID="7cd5a673-5d84-4e31-a475-103032e71a1f" podNamespace="kube-system" podName="kube-proxy-fkrbj"
Dec 13 01:11:11.806111 kubelet[2731]: I1213 01:11:11.806063 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cd5a673-5d84-4e31-a475-103032e71a1f-kube-proxy\") pod \"kube-proxy-fkrbj\" (UID: \"7cd5a673-5d84-4e31-a475-103032e71a1f\") " pod="kube-system/kube-proxy-fkrbj"
Dec 13 01:11:11.806111 kubelet[2731]: I1213 01:11:11.806107 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdclb\" (UniqueName: \"kubernetes.io/projected/6b5b04de-1eb4-469e-8213-11d1b2c3a86a-kube-api-access-qdclb\") pod \"tigera-operator-c7ccbd65-gjppp\" (UID: \"6b5b04de-1eb4-469e-8213-11d1b2c3a86a\") " pod="tigera-operator/tigera-operator-c7ccbd65-gjppp"
Dec 13 01:11:11.806111 kubelet[2731]: I1213 01:11:11.806126 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cd5a673-5d84-4e31-a475-103032e71a1f-lib-modules\") pod \"kube-proxy-fkrbj\" (UID: \"7cd5a673-5d84-4e31-a475-103032e71a1f\") " pod="kube-system/kube-proxy-fkrbj"
Dec 13 01:11:11.806279 kubelet[2731]: I1213 01:11:11.806145 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gdf\" (UniqueName: \"kubernetes.io/projected/7cd5a673-5d84-4e31-a475-103032e71a1f-kube-api-access-c4gdf\") pod \"kube-proxy-fkrbj\" (UID: \"7cd5a673-5d84-4e31-a475-103032e71a1f\") " pod="kube-system/kube-proxy-fkrbj"
Dec 13 01:11:11.806279 kubelet[2731]: I1213 01:11:11.806168 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b5b04de-1eb4-469e-8213-11d1b2c3a86a-var-lib-calico\") pod \"tigera-operator-c7ccbd65-gjppp\" (UID: \"6b5b04de-1eb4-469e-8213-11d1b2c3a86a\") " pod="tigera-operator/tigera-operator-c7ccbd65-gjppp"
Dec 13 01:11:11.806279 kubelet[2731]: I1213 01:11:11.806213 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cd5a673-5d84-4e31-a475-103032e71a1f-xtables-lock\") pod \"kube-proxy-fkrbj\" (UID: \"7cd5a673-5d84-4e31-a475-103032e71a1f\") " pod="kube-system/kube-proxy-fkrbj"
Dec 13 01:11:12.059826 containerd[1551]: time="2024-12-13T01:11:12.059793584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-gjppp,Uid:6b5b04de-1eb4-469e-8213-11d1b2c3a86a,Namespace:tigera-operator,Attempt:0,}"
Dec 13 01:11:12.068011 kubelet[2731]: E1213 01:11:12.067976 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:12.068321 containerd[1551]: time="2024-12-13T01:11:12.068268322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkrbj,Uid:7cd5a673-5d84-4e31-a475-103032e71a1f,Namespace:kube-system,Attempt:0,}"
Dec 13 01:11:12.205429 containerd[1551]: time="2024-12-13T01:11:12.205335424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:11:12.205429 containerd[1551]: time="2024-12-13T01:11:12.205403142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:11:12.205698 containerd[1551]: time="2024-12-13T01:11:12.205417459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:12.205698 containerd[1551]: time="2024-12-13T01:11:12.205514401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:12.212627 containerd[1551]: time="2024-12-13T01:11:12.212245657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:11:12.212627 containerd[1551]: time="2024-12-13T01:11:12.212315548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:11:12.212627 containerd[1551]: time="2024-12-13T01:11:12.212336207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:12.212627 containerd[1551]: time="2024-12-13T01:11:12.212472353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:12.242683 containerd[1551]: time="2024-12-13T01:11:12.242636604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkrbj,Uid:7cd5a673-5d84-4e31-a475-103032e71a1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3025793160fb02733aa4a52468e267d90647975fe1961fe4e9ef3f0b3ffcb1a\""
Dec 13 01:11:12.244265 kubelet[2731]: E1213 01:11:12.243996 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:12.247428 containerd[1551]: time="2024-12-13T01:11:12.247398210Z" level=info msg="CreateContainer within sandbox \"c3025793160fb02733aa4a52468e267d90647975fe1961fe4e9ef3f0b3ffcb1a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:11:12.258578 containerd[1551]: time="2024-12-13T01:11:12.258539289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-gjppp,Uid:6b5b04de-1eb4-469e-8213-11d1b2c3a86a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"530e66fbed498900358aaea1a16372ed38370fe4644622465930b8c311bdb8f2\""
Dec 13 01:11:12.260328 containerd[1551]: time="2024-12-13T01:11:12.260278233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 01:11:12.268926 containerd[1551]: time="2024-12-13T01:11:12.268890090Z" level=info msg="CreateContainer within sandbox \"c3025793160fb02733aa4a52468e267d90647975fe1961fe4e9ef3f0b3ffcb1a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aba97784420ecfe17010a8a109be9c2c437fa27fe989aec008d654a760dd6e59\""
Dec 13 01:11:12.269271 containerd[1551]: time="2024-12-13T01:11:12.269237885Z" level=info msg="StartContainer for \"aba97784420ecfe17010a8a109be9c2c437fa27fe989aec008d654a760dd6e59\""
Dec 13 01:11:12.321809 containerd[1551]: time="2024-12-13T01:11:12.321698440Z" level=info msg="StartContainer for \"aba97784420ecfe17010a8a109be9c2c437fa27fe989aec008d654a760dd6e59\" returns successfully"
Dec 13 01:11:13.066027 kubelet[2731]: E1213 01:11:13.065996 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:13.150068 kubelet[2731]: I1213 01:11:13.150024 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fkrbj" podStartSLOduration=2.149977278 podStartE2EDuration="2.149977278s" podCreationTimestamp="2024-12-13 01:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:11:13.149738388 +0000 UTC m=+15.203669569" watchObservedRunningTime="2024-12-13 01:11:13.149977278 +0000 UTC m=+15.203908459"
Dec 13 01:11:13.844119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140084172.mount: Deactivated successfully.
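[Editor's note] The recurring dns.go:153 entries above are kubelet warning that the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are omitted. A minimal Go sketch of that truncation, for illustration only; the constant and helper are hypothetical, not kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolv.conf limit of three servers that
// makes kubelet emit the dns.go:153 "Nameserver limits exceeded" warning.
const maxNameservers = 3

// effectiveNameservers is a hypothetical helper: parse resolv.conf text and
// keep only the first three nameserver entries, as the log says kubelet did.
func effectiveNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extra entries are omitted, as logged above
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(effectiveNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8] -- the applied line in the log
}
```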
Dec 13 01:11:14.317242 containerd[1551]: time="2024-12-13T01:11:14.317180091Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:14.365793 containerd[1551]: time="2024-12-13T01:11:14.365718216Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763669"
Dec 13 01:11:14.414650 containerd[1551]: time="2024-12-13T01:11:14.414614814Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:14.462645 containerd[1551]: time="2024-12-13T01:11:14.462588176Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:14.463420 containerd[1551]: time="2024-12-13T01:11:14.463363344Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.203028284s"
Dec 13 01:11:14.463500 containerd[1551]: time="2024-12-13T01:11:14.463426252Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 01:11:14.465197 containerd[1551]: time="2024-12-13T01:11:14.465167760Z" level=info msg="CreateContainer within sandbox \"530e66fbed498900358aaea1a16372ed38370fe4644622465930b8c311bdb8f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 01:11:14.479991 containerd[1551]: time="2024-12-13T01:11:14.479936627Z" level=info msg="CreateContainer within sandbox \"530e66fbed498900358aaea1a16372ed38370fe4644622465930b8c311bdb8f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0a45066ea87ee021bff91afe05c2cf8bf5d27c242e6d2bb77a0ddbf8a48ec3d\""
Dec 13 01:11:14.480491 containerd[1551]: time="2024-12-13T01:11:14.480433192Z" level=info msg="StartContainer for \"f0a45066ea87ee021bff91afe05c2cf8bf5d27c242e6d2bb77a0ddbf8a48ec3d\""
Dec 13 01:11:14.533191 containerd[1551]: time="2024-12-13T01:11:14.533151814Z" level=info msg="StartContainer for \"f0a45066ea87ee021bff91afe05c2cf8bf5d27c242e6d2bb77a0ddbf8a48ec3d\" returns successfully"
Dec 13 01:11:17.340594 kubelet[2731]: I1213 01:11:17.339772 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-gjppp" podStartSLOduration=4.135398857 podStartE2EDuration="6.339721577s" podCreationTimestamp="2024-12-13 01:11:11 +0000 UTC" firstStartedPulling="2024-12-13 01:11:12.259478979 +0000 UTC m=+14.313410160" lastFinishedPulling="2024-12-13 01:11:14.463801699 +0000 UTC m=+16.517732880" observedRunningTime="2024-12-13 01:11:15.0802302 +0000 UTC m=+17.134161371" watchObservedRunningTime="2024-12-13 01:11:17.339721577 +0000 UTC m=+19.393652758"
Dec 13 01:11:17.340594 kubelet[2731]: I1213 01:11:17.339997 2731 topology_manager.go:215] "Topology Admit Handler" podUID="8cf63954-e97e-45c1-bb13-1b47bbb699cb" podNamespace="calico-system" podName="calico-typha-6b79bcf856-g99kx"
Dec 13 01:11:17.383301 kubelet[2731]: I1213 01:11:17.383258 2731 topology_manager.go:215] "Topology Admit Handler" podUID="9510896b-0814-4622-9cc7-1bf1c95421d6" podNamespace="calico-system" podName="calico-node-kh9rl"
Dec 13 01:11:17.447963 kubelet[2731]: I1213 01:11:17.447921 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-log-dir\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.447963 kubelet[2731]: I1213 01:11:17.447955 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-flexvol-driver-host\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448151 kubelet[2731]: I1213 01:11:17.447976 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf63954-e97e-45c1-bb13-1b47bbb699cb-tigera-ca-bundle\") pod \"calico-typha-6b79bcf856-g99kx\" (UID: \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\") " pod="calico-system/calico-typha-6b79bcf856-g99kx"
Dec 13 01:11:17.448151 kubelet[2731]: I1213 01:11:17.447997 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-xtables-lock\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448151 kubelet[2731]: I1213 01:11:17.448015 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9510896b-0814-4622-9cc7-1bf1c95421d6-tigera-ca-bundle\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448151 kubelet[2731]: I1213 01:11:17.448031 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-net-dir\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448151 kubelet[2731]: I1213 01:11:17.448057 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xwj\" (UniqueName: \"kubernetes.io/projected/9510896b-0814-4622-9cc7-1bf1c95421d6-kube-api-access-j5xwj\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448307 kubelet[2731]: I1213 01:11:17.448104 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9510896b-0814-4622-9cc7-1bf1c95421d6-node-certs\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448307 kubelet[2731]: I1213 01:11:17.448133 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-lib-calico\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448307 kubelet[2731]: I1213 01:11:17.448151 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2n5\" (UniqueName: \"kubernetes.io/projected/8cf63954-e97e-45c1-bb13-1b47bbb699cb-kube-api-access-mp2n5\") pod \"calico-typha-6b79bcf856-g99kx\" (UID: \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\") " pod="calico-system/calico-typha-6b79bcf856-g99kx"
Dec 13 01:11:17.448307 kubelet[2731]: I1213 01:11:17.448180 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-lib-modules\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448307 kubelet[2731]: I1213 01:11:17.448216 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-policysync\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448487 kubelet[2731]: I1213 01:11:17.448247 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8cf63954-e97e-45c1-bb13-1b47bbb699cb-typha-certs\") pod \"calico-typha-6b79bcf856-g99kx\" (UID: \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\") " pod="calico-system/calico-typha-6b79bcf856-g99kx"
Dec 13 01:11:17.448487 kubelet[2731]: I1213 01:11:17.448263 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-run-calico\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.448487 kubelet[2731]: I1213 01:11:17.448281 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-bin-dir\") pod \"calico-node-kh9rl\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " pod="calico-system/calico-node-kh9rl"
Dec 13 01:11:17.502668 kubelet[2731]: I1213 01:11:17.502570 2731 topology_manager.go:215] "Topology Admit Handler" podUID="22974a96-e85c-4133-adca-45fb1d4311f1" podNamespace="calico-system" podName="csi-node-driver-bws6x"
Dec 13 01:11:17.504672 kubelet[2731]: E1213 01:11:17.504648 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1"
Dec 13 01:11:17.549716 kubelet[2731]: I1213 01:11:17.549667 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22974a96-e85c-4133-adca-45fb1d4311f1-varrun\") pod \"csi-node-driver-bws6x\" (UID: \"22974a96-e85c-4133-adca-45fb1d4311f1\") " pod="calico-system/csi-node-driver-bws6x"
Dec 13 01:11:17.549866 kubelet[2731]: I1213 01:11:17.549736 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22974a96-e85c-4133-adca-45fb1d4311f1-registration-dir\") pod \"csi-node-driver-bws6x\" (UID: \"22974a96-e85c-4133-adca-45fb1d4311f1\") " pod="calico-system/csi-node-driver-bws6x"
Dec 13 01:11:17.549866 kubelet[2731]: I1213 01:11:17.549787 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22974a96-e85c-4133-adca-45fb1d4311f1-kubelet-dir\") pod \"csi-node-driver-bws6x\" (UID: \"22974a96-e85c-4133-adca-45fb1d4311f1\") " pod="calico-system/csi-node-driver-bws6x"
Dec 13 01:11:17.549866 kubelet[2731]: I1213 01:11:17.549814 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqd9\" (UniqueName: \"kubernetes.io/projected/22974a96-e85c-4133-adca-45fb1d4311f1-kube-api-access-nbqd9\") pod \"csi-node-driver-bws6x\" (UID: \"22974a96-e85c-4133-adca-45fb1d4311f1\") " pod="calico-system/csi-node-driver-bws6x"
Dec 13 01:11:17.549866 kubelet[2731]: I1213 01:11:17.549850 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22974a96-e85c-4133-adca-45fb1d4311f1-socket-dir\") pod \"csi-node-driver-bws6x\" (UID: \"22974a96-e85c-4133-adca-45fb1d4311f1\") " pod="calico-system/csi-node-driver-bws6x"
Dec 13 01:11:17.577000 kubelet[2731]: E1213 01:11:17.574586 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.577000 kubelet[2731]: W1213 01:11:17.574609 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.577000 kubelet[2731]: E1213 01:11:17.574648 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.579506 kubelet[2731]: E1213 01:11:17.579475 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.579654 kubelet[2731]: W1213 01:11:17.579506 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.579654 kubelet[2731]: E1213 01:11:17.579538 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.581905 kubelet[2731]: E1213 01:11:17.581834 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.581905 kubelet[2731]: W1213 01:11:17.581853 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.581905 kubelet[2731]: E1213 01:11:17.581873 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.648389 kubelet[2731]: E1213 01:11:17.648269 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:17.650359 containerd[1551]: time="2024-12-13T01:11:17.650325458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b79bcf856-g99kx,Uid:8cf63954-e97e-45c1-bb13-1b47bbb699cb,Namespace:calico-system,Attempt:0,}"
Dec 13 01:11:17.650914 kubelet[2731]: E1213 01:11:17.650891 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.650914 kubelet[2731]: W1213 01:11:17.650908 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.650988 kubelet[2731]: E1213 01:11:17.650931 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.651224 kubelet[2731]: E1213 01:11:17.651200 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.651224 kubelet[2731]: W1213 01:11:17.651213 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.651286 kubelet[2731]: E1213 01:11:17.651245 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.651577 kubelet[2731]: E1213 01:11:17.651559 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.651577 kubelet[2731]: W1213 01:11:17.651574 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.651679 kubelet[2731]: E1213 01:11:17.651596 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.652240 kubelet[2731]: E1213 01:11:17.651836 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.652240 kubelet[2731]: W1213 01:11:17.651846 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.652240 kubelet[2731]: E1213 01:11:17.651860 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.652240 kubelet[2731]: E1213 01:11:17.652075 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.652240 kubelet[2731]: W1213 01:11:17.652086 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.652240 kubelet[2731]: E1213 01:11:17.652106 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.652504 kubelet[2731]: E1213 01:11:17.652293 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.652504 kubelet[2731]: W1213 01:11:17.652303 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.652504 kubelet[2731]: E1213 01:11:17.652321 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.652590 kubelet[2731]: E1213 01:11:17.652563 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.652590 kubelet[2731]: W1213 01:11:17.652573 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.652674 kubelet[2731]: E1213 01:11:17.652645 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.652833 kubelet[2731]: E1213 01:11:17.652817 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.652833 kubelet[2731]: W1213 01:11:17.652830 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.652925 kubelet[2731]: E1213 01:11:17.652855 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.653057 kubelet[2731]: E1213 01:11:17.653024 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.653057 kubelet[2731]: W1213 01:11:17.653034 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.653207 kubelet[2731]: E1213 01:11:17.653182 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.653518 kubelet[2731]: E1213 01:11:17.653505 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.653518 kubelet[2731]: W1213 01:11:17.653516 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.653624 kubelet[2731]: E1213 01:11:17.653569 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.653723 kubelet[2731]: E1213 01:11:17.653712 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.653723 kubelet[2731]: W1213 01:11:17.653721 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.653815 kubelet[2731]: E1213 01:11:17.653752 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.653958 kubelet[2731]: E1213 01:11:17.653938 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.653958 kubelet[2731]: W1213 01:11:17.653956 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.654044 kubelet[2731]: E1213 01:11:17.653992 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.654134 kubelet[2731]: E1213 01:11:17.654122 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.654134 kubelet[2731]: W1213 01:11:17.654130 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.654211 kubelet[2731]: E1213 01:11:17.654162 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.654349 kubelet[2731]: E1213 01:11:17.654333 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.654349 kubelet[2731]: W1213 01:11:17.654346 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.654441 kubelet[2731]: E1213 01:11:17.654382 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.654603 kubelet[2731]: E1213 01:11:17.654590 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.654641 kubelet[2731]: W1213 01:11:17.654602 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.654641 kubelet[2731]: E1213 01:11:17.654620 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.654841 kubelet[2731]: E1213 01:11:17.654828 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.654874 kubelet[2731]: W1213 01:11:17.654840 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.654929 kubelet[2731]: E1213 01:11:17.654908 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.655054 kubelet[2731]: E1213 01:11:17.655041 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.655054 kubelet[2731]: W1213 01:11:17.655052 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.655162 kubelet[2731]: E1213 01:11:17.655108 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.655287 kubelet[2731]: E1213 01:11:17.655274 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.655317 kubelet[2731]: W1213 01:11:17.655287 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.655338 kubelet[2731]: E1213 01:11:17.655315 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.655533 kubelet[2731]: E1213 01:11:17.655512 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.655533 kubelet[2731]: W1213 01:11:17.655524 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.655654 kubelet[2731]: E1213 01:11:17.655614 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.655736 kubelet[2731]: E1213 01:11:17.655725 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.655761 kubelet[2731]: W1213 01:11:17.655735 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.655761 kubelet[2731]: E1213 01:11:17.655750 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.655927 kubelet[2731]: E1213 01:11:17.655916 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.655927 kubelet[2731]: W1213 01:11:17.655925 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.655991 kubelet[2731]: E1213 01:11:17.655937 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.656222 kubelet[2731]: E1213 01:11:17.656210 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.656257 kubelet[2731]: W1213 01:11:17.656222 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.656257 kubelet[2731]: E1213 01:11:17.656241 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.656476 kubelet[2731]: E1213 01:11:17.656463 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.656511 kubelet[2731]: W1213 01:11:17.656475 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.656511 kubelet[2731]: E1213 01:11:17.656493 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.656767 kubelet[2731]: E1213 01:11:17.656754 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.656767 kubelet[2731]: W1213 01:11:17.656767 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.656820 kubelet[2731]: E1213 01:11:17.656782 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.657251 kubelet[2731]: E1213 01:11:17.657230 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.657251 kubelet[2731]: W1213 01:11:17.657241 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.657317 kubelet[2731]: E1213 01:11:17.657254 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.664512 kubelet[2731]: E1213 01:11:17.664484 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:17.664512 kubelet[2731]: W1213 01:11:17.664507 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:17.665298 kubelet[2731]: E1213 01:11:17.664530 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:17.676424 containerd[1551]: time="2024-12-13T01:11:17.676262053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:11:17.676424 containerd[1551]: time="2024-12-13T01:11:17.676324871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:11:17.676424 containerd[1551]: time="2024-12-13T01:11:17.676338847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:17.677258 containerd[1551]: time="2024-12-13T01:11:17.677209926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:17.691603 kubelet[2731]: E1213 01:11:17.691579 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:17.692253 containerd[1551]: time="2024-12-13T01:11:17.692195784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kh9rl,Uid:9510896b-0814-4622-9cc7-1bf1c95421d6,Namespace:calico-system,Attempt:0,}"
Dec 13 01:11:17.731628 containerd[1551]: time="2024-12-13T01:11:17.731515340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:11:17.731628 containerd[1551]: time="2024-12-13T01:11:17.731582416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:11:17.732110 containerd[1551]: time="2024-12-13T01:11:17.731599799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:17.732110 containerd[1551]: time="2024-12-13T01:11:17.731701711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:11:17.750847 containerd[1551]: time="2024-12-13T01:11:17.749682541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b79bcf856-g99kx,Uid:8cf63954-e97e-45c1-bb13-1b47bbb699cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf\""
Dec 13 01:11:17.758966 kubelet[2731]: E1213 01:11:17.757886 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:17.770039 containerd[1551]: time="2024-12-13T01:11:17.769740579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:11:17.785768 containerd[1551]: time="2024-12-13T01:11:17.785710377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kh9rl,Uid:9510896b-0814-4622-9cc7-1bf1c95421d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\""
Dec 13 01:11:17.786514 kubelet[2731]: E1213 01:11:17.786478 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:19.032887 kubelet[2731]: E1213 01:11:19.032844 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1"
Dec 13 01:11:19.458622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601913043.mount: Deactivated successfully.
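[Editor's note] The pod_startup_latency_tracker.go:102 "Observed pod startup duration" entries above encode a simple relation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go sketch checks this against the tigera-operator entry logged earlier; the helper is an illustrative reconstruction, not kubelet's code:

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations is a hypothetical reconstruction of the arithmetic behind
// "Observed pod startup duration": the SLO figure excludes image-pull time.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e - lastPull.Sub(firstPull)
	return slo, e2e
}

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps taken from the tigera-operator-c7ccbd65-gjppp entry above.
	created := parse("2024-12-13T01:11:11Z")
	firstPull := parse("2024-12-13T01:11:12.259478979Z")
	lastPull := parse("2024-12-13T01:11:14.463801699Z")
	running := parse("2024-12-13T01:11:17.339721577Z") // watchObservedRunningTime

	slo, e2e := startupDurations(created, firstPull, lastPull, running)
	fmt.Println(slo, e2e) // 4.135398857s 6.339721577s, matching podStartSLOduration and podStartE2EDuration
}
```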
Dec 13 01:11:20.235957 containerd[1551]: time="2024-12-13T01:11:20.235913197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:20.236682 containerd[1551]: time="2024-12-13T01:11:20.236649351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 01:11:20.237698 containerd[1551]: time="2024-12-13T01:11:20.237662486Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:20.239806 containerd[1551]: time="2024-12-13T01:11:20.239762062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:20.240293 containerd[1551]: time="2024-12-13T01:11:20.240262474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.470480267s"
Dec 13 01:11:20.240293 containerd[1551]: time="2024-12-13T01:11:20.240288102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:11:20.240816 containerd[1551]: time="2024-12-13T01:11:20.240793101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:11:20.251609 containerd[1551]: time="2024-12-13T01:11:20.251568728Z" level=info msg="CreateContainer within sandbox \"5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:11:20.265818 containerd[1551]: time="2024-12-13T01:11:20.265775607Z" level=info msg="CreateContainer within sandbox \"5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\""
Dec 13 01:11:20.266303 containerd[1551]: time="2024-12-13T01:11:20.266224581Z" level=info msg="StartContainer for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\""
Dec 13 01:11:20.334391 containerd[1551]: time="2024-12-13T01:11:20.334171497Z" level=info msg="StartContainer for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" returns successfully"
Dec 13 01:11:21.038618 kubelet[2731]: E1213 01:11:21.038577 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1"
Dec 13 01:11:21.086401 kubelet[2731]: E1213 01:11:21.086243 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:21.101266 kubelet[2731]: I1213 01:11:21.101201 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6b79bcf856-g99kx" podStartSLOduration=1.6261911310000001 podStartE2EDuration="4.101158057s" podCreationTimestamp="2024-12-13 01:11:17 +0000 UTC" firstStartedPulling="2024-12-13 01:11:17.765500513 +0000 UTC m=+19.819431694" lastFinishedPulling="2024-12-13 01:11:20.240467449 +0000 UTC m=+22.294398620" observedRunningTime="2024-12-13 01:11:21.101055425 +0000 UTC m=+23.154986606" watchObservedRunningTime="2024-12-13 01:11:21.101158057 +0000 UTC m=+23.155089238"
Dec 13 01:11:21.158850 kubelet[2731]: E1213 01:11:21.158816 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:21.158850 kubelet[2731]: W1213 01:11:21.158840 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:21.158850 kubelet[2731]: E1213 01:11:21.158861 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:21.159082 kubelet[2731]: E1213 01:11:21.159057 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:21.159082 kubelet[2731]: W1213 01:11:21.159067 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:21.159082 kubelet[2731]: E1213 01:11:21.159082 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:21.159290 kubelet[2731]: E1213 01:11:21.159270 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:21.159290 kubelet[2731]: W1213 01:11:21.159280 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:21.159290 kubelet[2731]: E1213 01:11:21.159290 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:11:21.159528 kubelet[2731]: E1213 01:11:21.159510 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:21.159528 kubelet[2731]: W1213 01:11:21.159521 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:21.159528 kubelet[2731]: E1213 01:11:21.159530 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 13 01:11:21.159730 kubelet[2731]: E1213 01:11:21.159708 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.159730 kubelet[2731]: W1213 01:11:21.159719 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.159730 kubelet[2731]: E1213 01:11:21.159728 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.159944 kubelet[2731]: E1213 01:11:21.159934 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.159944 kubelet[2731]: W1213 01:11:21.159943 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.159998 kubelet[2731]: E1213 01:11:21.159952 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.160143 kubelet[2731]: E1213 01:11:21.160126 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.160143 kubelet[2731]: W1213 01:11:21.160134 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.160143 kubelet[2731]: E1213 01:11:21.160144 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.160321 kubelet[2731]: E1213 01:11:21.160312 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.160321 kubelet[2731]: W1213 01:11:21.160319 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.160385 kubelet[2731]: E1213 01:11:21.160328 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.160535 kubelet[2731]: E1213 01:11:21.160524 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.160535 kubelet[2731]: W1213 01:11:21.160533 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.160581 kubelet[2731]: E1213 01:11:21.160542 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:11:21.160714 kubelet[2731]: E1213 01:11:21.160705 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.160714 kubelet[2731]: W1213 01:11:21.160713 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.160761 kubelet[2731]: E1213 01:11:21.160724 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.160903 kubelet[2731]: E1213 01:11:21.160893 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.160903 kubelet[2731]: W1213 01:11:21.160901 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.160948 kubelet[2731]: E1213 01:11:21.160911 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.161131 kubelet[2731]: E1213 01:11:21.161120 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.161131 kubelet[2731]: W1213 01:11:21.161128 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.161191 kubelet[2731]: E1213 01:11:21.161137 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.161339 kubelet[2731]: E1213 01:11:21.161328 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.161379 kubelet[2731]: W1213 01:11:21.161338 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.161379 kubelet[2731]: E1213 01:11:21.161347 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:11:21.161552 kubelet[2731]: E1213 01:11:21.161542 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:11:21.161552 kubelet[2731]: W1213 01:11:21.161550 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:11:21.161591 kubelet[2731]: E1213 01:11:21.161560 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 01:11:21.161734 kubelet[2731]: E1213 01:11:21.161725 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:21.161734 kubelet[2731]: W1213 01:11:21.161733 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:21.161776 kubelet[2731]: E1213 01:11:21.161742 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same driver-call.go:262 / driver-call.go:149 / plugins.go:730 sequence repeats unchanged, with successive timestamps, through Dec 13 01:11:21.199 …]
Dec 13 01:11:21.199085 kubelet[2731]: E1213 01:11:21.199074 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:11:21.199085 kubelet[2731]: W1213 01:11:21.199083 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:11:21.199133 kubelet[2731]: E1213 01:11:21.199094 2731 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
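The three-message cycle above is the kubelet's FlexVolume probe: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes "<driver> init" and unmarshals whatever the binary prints to stdout as JSON. Here the uds binary does not exist ("executable file not found in $PATH"), so stdout is empty, and json.Unmarshal over zero bytes is exactly what produces "unexpected end of JSON input". A minimal sketch of the contract a driver has to satisfy, assuming only the documented FlexVolume call convention; this is a hypothetical stand-in, not the real nodeagent~uds driver:

// flexvol-init.go: hypothetical FlexVolume driver binary. The kubelet runs
// "<driver> init" and expects a JSON status on stdout; printing nothing
// reproduces the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet unmarshals after each call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	reply := driverStatus{Status: "Not supported"}
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A node-local driver that does its own mounts reports attach=false.
		reply = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	}
	out, err := json.Marshal(reply)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(out)) // always answer with valid JSON, even for unknown calls
}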
Dec 13 01:11:21.714102 containerd[1551]: time="2024-12-13T01:11:21.714045173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:21.715094 containerd[1551]: time="2024-12-13T01:11:21.714959853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 13 01:11:21.716273 containerd[1551]: time="2024-12-13T01:11:21.716223638Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:21.718507 containerd[1551]: time="2024-12-13T01:11:21.718464600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:21.719108 containerd[1551]: time="2024-12-13T01:11:21.719077361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.478255366s"
Dec 13 01:11:21.719145 containerd[1551]: time="2024-12-13T01:11:21.719112658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 01:11:21.720704 containerd[1551]: time="2024-12-13T01:11:21.720670716Z" level=info msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:11:21.738258 containerd[1551]: time="2024-12-13T01:11:21.738182232Z" level=info msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a\""
Dec 13 01:11:21.738786 containerd[1551]: time="2024-12-13T01:11:21.738752614Z" level=info msg="StartContainer for \"669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a\""
Dec 13 01:11:21.958935 containerd[1551]: time="2024-12-13T01:11:21.958877091Z" level=info msg="StartContainer for \"669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a\" returns successfully"
Dec 13 01:11:21.986097 containerd[1551]: time="2024-12-13T01:11:21.984487411Z" level=info msg="shim disconnected" id=669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a namespace=k8s.io
Dec 13 01:11:21.986097 containerd[1551]: time="2024-12-13T01:11:21.986004983Z" level=warning msg="cleaning up after shim disconnected" id=669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a namespace=k8s.io
Dec 13 01:11:21.986097 containerd[1551]: time="2024-12-13T01:11:21.986015352Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:11:22.088723 kubelet[2731]: E1213 01:11:22.088680 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13
01:11:22.088723 kubelet[2731]: E1213 01:11:22.088720 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:22.089630 containerd[1551]: time="2024-12-13T01:11:22.089263039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:11:22.248342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a-rootfs.mount: Deactivated successfully.
Dec 13 01:11:23.032799 kubelet[2731]: E1213 01:11:23.032753 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1"
Dec 13 01:11:23.090219 kubelet[2731]: E1213 01:11:23.090186 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:11:25.033045 kubelet[2731]: E1213 01:11:25.032976 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1"
Dec 13 01:11:25.331073 containerd[1551]: time="2024-12-13T01:11:25.330958107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:25.332034 containerd[1551]: time="2024-12-13T01:11:25.331991720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 01:11:25.333164 containerd[1551]: time="2024-12-13T01:11:25.333142561Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:25.335309 containerd[1551]: time="2024-12-13T01:11:25.335267022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:11:25.335872 containerd[1551]: time="2024-12-13T01:11:25.335847012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.246549358s"
Dec 13 01:11:25.335909 containerd[1551]: time="2024-12-13T01:11:25.335872891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 01:11:25.337256 containerd[1551]: time="2024-12-13T01:11:25.337233747Z" level=info msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:11:25.458979 containerd[1551]: time="2024-12-13T01:11:25.458912211Z" level=info msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\""
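The PullImage / ImageCreate / Pulled sequence above is containerd's image service resolving the tag, fetching layers, and recording both the tag and the digest reference before the kubelet creates the install-cni container. A minimal sketch of the same pull through containerd's Go client, assuming the containerd 1.x client library (github.com/containerd/containerd), the default socket path, and the "k8s.io" namespace the CRI plugin uses for kubelet-managed images:

// pull-image.go: sketch of pulling an image via containerd's Go client,
// roughly what sits behind the PullImage/ImageCreate lines above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin keeps kubelet-managed images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack unpacks layers into a snapshot so the image is
	// immediately usable as a container rootfs.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	log.Printf("pulled %s, digest %s", img.Name(), img.Target().Digest)
}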
msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\"" Dec 13 01:11:25.459593 containerd[1551]: time="2024-12-13T01:11:25.459505175Z" level=info msg="StartContainer for \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\"" Dec 13 01:11:25.518846 containerd[1551]: time="2024-12-13T01:11:25.518803567Z" level=info msg="StartContainer for \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\" returns successfully" Dec 13 01:11:26.097055 kubelet[2731]: E1213 01:11:26.097029 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:26.802934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656-rootfs.mount: Deactivated successfully. Dec 13 01:11:26.805329 containerd[1551]: time="2024-12-13T01:11:26.805230200Z" level=info msg="shim disconnected" id=cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656 namespace=k8s.io Dec 13 01:11:26.805329 containerd[1551]: time="2024-12-13T01:11:26.805315590Z" level=warning msg="cleaning up after shim disconnected" id=cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656 namespace=k8s.io Dec 13 01:11:26.805329 containerd[1551]: time="2024-12-13T01:11:26.805324998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:11:26.810137 kubelet[2731]: I1213 01:11:26.809360 2731 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:11:26.832966 kubelet[2731]: I1213 01:11:26.832917 2731 topology_manager.go:215] "Topology Admit Handler" podUID="28737369-3c2e-42da-8f56-354717ebe21b" podNamespace="kube-system" podName="coredns-76f75df574-x6v2k" Dec 13 01:11:26.833124 kubelet[2731]: I1213 01:11:26.833095 2731 topology_manager.go:215] "Topology Admit Handler" podUID="f2a10eb6-3688-422c-b65b-01a7d73ed991" podNamespace="calico-apiserver" podName="calico-apiserver-7f8948c8d7-gfxjh" Dec 13 01:11:26.837348 kubelet[2731]: I1213 01:11:26.836641 2731 topology_manager.go:215] "Topology Admit Handler" podUID="f6bdff36-140a-401a-9765-907c2bbf003f" podNamespace="calico-system" podName="calico-kube-controllers-54c9b9587d-vnvps" Dec 13 01:11:26.837348 kubelet[2731]: I1213 01:11:26.836781 2731 topology_manager.go:215] "Topology Admit Handler" podUID="4fa073bd-a2a4-4b4c-83a5-815ebeb43007" podNamespace="kube-system" podName="coredns-76f75df574-8lhww" Dec 13 01:11:26.838167 kubelet[2731]: I1213 01:11:26.838148 2731 topology_manager.go:215] "Topology Admit Handler" podUID="ea139758-c514-433e-ba8c-28ff42ef6f58" podNamespace="calico-apiserver" podName="calico-apiserver-7f8948c8d7-fq2q4" Dec 13 01:11:26.937130 kubelet[2731]: I1213 01:11:26.937056 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6bdff36-140a-401a-9765-907c2bbf003f-tigera-ca-bundle\") pod \"calico-kube-controllers-54c9b9587d-vnvps\" (UID: \"f6bdff36-140a-401a-9765-907c2bbf003f\") " pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" Dec 13 01:11:26.937130 kubelet[2731]: I1213 01:11:26.937105 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbh48\" 
(UniqueName: \"kubernetes.io/projected/f6bdff36-140a-401a-9765-907c2bbf003f-kube-api-access-fbh48\") pod \"calico-kube-controllers-54c9b9587d-vnvps\" (UID: \"f6bdff36-140a-401a-9765-907c2bbf003f\") " pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" Dec 13 01:11:26.937314 kubelet[2731]: I1213 01:11:26.937162 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22jt\" (UniqueName: \"kubernetes.io/projected/4fa073bd-a2a4-4b4c-83a5-815ebeb43007-kube-api-access-x22jt\") pod \"coredns-76f75df574-8lhww\" (UID: \"4fa073bd-a2a4-4b4c-83a5-815ebeb43007\") " pod="kube-system/coredns-76f75df574-8lhww" Dec 13 01:11:26.937314 kubelet[2731]: I1213 01:11:26.937205 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28737369-3c2e-42da-8f56-354717ebe21b-config-volume\") pod \"coredns-76f75df574-x6v2k\" (UID: \"28737369-3c2e-42da-8f56-354717ebe21b\") " pod="kube-system/coredns-76f75df574-x6v2k" Dec 13 01:11:26.937314 kubelet[2731]: I1213 01:11:26.937240 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2a10eb6-3688-422c-b65b-01a7d73ed991-calico-apiserver-certs\") pod \"calico-apiserver-7f8948c8d7-gfxjh\" (UID: \"f2a10eb6-3688-422c-b65b-01a7d73ed991\") " pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" Dec 13 01:11:26.937314 kubelet[2731]: I1213 01:11:26.937258 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9gd\" (UniqueName: \"kubernetes.io/projected/f2a10eb6-3688-422c-b65b-01a7d73ed991-kube-api-access-ng9gd\") pod \"calico-apiserver-7f8948c8d7-gfxjh\" (UID: \"f2a10eb6-3688-422c-b65b-01a7d73ed991\") " pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" Dec 13 01:11:26.937314 kubelet[2731]: I1213 01:11:26.937285 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa073bd-a2a4-4b4c-83a5-815ebeb43007-config-volume\") pod \"coredns-76f75df574-8lhww\" (UID: \"4fa073bd-a2a4-4b4c-83a5-815ebeb43007\") " pod="kube-system/coredns-76f75df574-8lhww" Dec 13 01:11:26.937470 kubelet[2731]: I1213 01:11:26.937319 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwspv\" (UniqueName: \"kubernetes.io/projected/28737369-3c2e-42da-8f56-354717ebe21b-kube-api-access-fwspv\") pod \"coredns-76f75df574-x6v2k\" (UID: \"28737369-3c2e-42da-8f56-354717ebe21b\") " pod="kube-system/coredns-76f75df574-x6v2k" Dec 13 01:11:26.937470 kubelet[2731]: I1213 01:11:26.937345 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgzz\" (UniqueName: \"kubernetes.io/projected/ea139758-c514-433e-ba8c-28ff42ef6f58-kube-api-access-nsgzz\") pod \"calico-apiserver-7f8948c8d7-fq2q4\" (UID: \"ea139758-c514-433e-ba8c-28ff42ef6f58\") " pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" Dec 13 01:11:26.937470 kubelet[2731]: I1213 01:11:26.937391 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea139758-c514-433e-ba8c-28ff42ef6f58-calico-apiserver-certs\") pod \"calico-apiserver-7f8948c8d7-fq2q4\" (UID: 
\"ea139758-c514-433e-ba8c-28ff42ef6f58\") " pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" Dec 13 01:11:27.035582 containerd[1551]: time="2024-12-13T01:11:27.035531437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bws6x,Uid:22974a96-e85c-4133-adca-45fb1d4311f1,Namespace:calico-system,Attempt:0,}" Dec 13 01:11:27.100336 kubelet[2731]: E1213 01:11:27.100232 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:27.101951 containerd[1551]: time="2024-12-13T01:11:27.101912789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:11:27.122203 containerd[1551]: time="2024-12-13T01:11:27.122139650Z" level=error msg="Failed to destroy network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.122631 containerd[1551]: time="2024-12-13T01:11:27.122588834Z" level=error msg="encountered an error cleaning up failed sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.122679 containerd[1551]: time="2024-12-13T01:11:27.122636544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bws6x,Uid:22974a96-e85c-4133-adca-45fb1d4311f1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.123250 kubelet[2731]: E1213 01:11:27.122927 2731 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.123250 kubelet[2731]: E1213 01:11:27.122999 2731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bws6x" Dec 13 01:11:27.123250 kubelet[2731]: E1213 01:11:27.123026 2731 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bws6x" Dec 13 
01:11:27.123491 kubelet[2731]: E1213 01:11:27.123107 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bws6x_calico-system(22974a96-e85c-4133-adca-45fb1d4311f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bws6x_calico-system(22974a96-e85c-4133-adca-45fb1d4311f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1" Dec 13 01:11:27.141838 containerd[1551]: time="2024-12-13T01:11:27.141798234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-gfxjh,Uid:f2a10eb6-3688-422c-b65b-01a7d73ed991,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:11:27.147101 kubelet[2731]: E1213 01:11:27.147065 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:27.148023 containerd[1551]: time="2024-12-13T01:11:27.147987711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x6v2k,Uid:28737369-3c2e-42da-8f56-354717ebe21b,Namespace:kube-system,Attempt:0,}" Dec 13 01:11:27.149107 kubelet[2731]: E1213 01:11:27.149087 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:27.149496 containerd[1551]: time="2024-12-13T01:11:27.149461579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8lhww,Uid:4fa073bd-a2a4-4b4c-83a5-815ebeb43007,Namespace:kube-system,Attempt:0,}" Dec 13 01:11:27.155301 containerd[1551]: time="2024-12-13T01:11:27.155253017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c9b9587d-vnvps,Uid:f6bdff36-140a-401a-9765-907c2bbf003f,Namespace:calico-system,Attempt:0,}" Dec 13 01:11:27.155709 containerd[1551]: time="2024-12-13T01:11:27.155669839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-fq2q4,Uid:ea139758-c514-433e-ba8c-28ff42ef6f58,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:11:27.298162 containerd[1551]: time="2024-12-13T01:11:27.298041743Z" level=error msg="Failed to destroy network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.299448 containerd[1551]: time="2024-12-13T01:11:27.299015782Z" level=error msg="encountered an error cleaning up failed sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.299448 containerd[1551]: time="2024-12-13T01:11:27.299154252Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-gfxjh,Uid:f2a10eb6-3688-422c-b65b-01a7d73ed991,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.299607 kubelet[2731]: E1213 01:11:27.299458 2731 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.299607 kubelet[2731]: E1213 01:11:27.299513 2731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" Dec 13 01:11:27.299607 kubelet[2731]: E1213 01:11:27.299533 2731 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" Dec 13 01:11:27.299731 kubelet[2731]: E1213 01:11:27.299587 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f8948c8d7-gfxjh_calico-apiserver(f2a10eb6-3688-422c-b65b-01a7d73ed991)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f8948c8d7-gfxjh_calico-apiserver(f2a10eb6-3688-422c-b65b-01a7d73ed991)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" podUID="f2a10eb6-3688-422c-b65b-01a7d73ed991" Dec 13 01:11:27.303384 containerd[1551]: time="2024-12-13T01:11:27.303313304Z" level=error msg="Failed to destroy network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.303905 containerd[1551]: time="2024-12-13T01:11:27.303884838Z" level=error msg="encountered an error cleaning up failed sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 01:11:27.304043 containerd[1551]: time="2024-12-13T01:11:27.303988662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c9b9587d-vnvps,Uid:f6bdff36-140a-401a-9765-907c2bbf003f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.304659 kubelet[2731]: E1213 01:11:27.304265 2731 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.304659 kubelet[2731]: E1213 01:11:27.304306 2731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" Dec 13 01:11:27.304659 kubelet[2731]: E1213 01:11:27.304324 2731 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" Dec 13 01:11:27.304777 kubelet[2731]: E1213 01:11:27.304455 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54c9b9587d-vnvps_calico-system(f6bdff36-140a-401a-9765-907c2bbf003f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54c9b9587d-vnvps_calico-system(f6bdff36-140a-401a-9765-907c2bbf003f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" podUID="f6bdff36-140a-401a-9765-907c2bbf003f" Dec 13 01:11:27.312517 containerd[1551]: time="2024-12-13T01:11:27.312163096Z" level=error msg="Failed to destroy network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.312277 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:42512.service - OpenSSH per-connection server daemon (10.0.0.1:42512). 
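Every RunPodSandbox failure above bottoms out in the same stat call: the Calico CNI plugin refuses to do any add or delete until calico/node has written the node's name to /var/lib/calico/nodename, which only happens once the calico-node container is running with /var/lib/calico mounted from the host. A sketch of that readiness gate, assuming only what the error text states (hypothetical helper names; the real plugin can also take the node name from its CNI config or environment):

// nodename-gate.go: sketch of the check behind "stat /var/lib/calico/nodename:
// no such file or directory" seen in every sandbox failure above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeName is a hypothetical mirror of the plugin's readiness check.
func calicoNodeName() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// os.Stat yields "stat /var/lib/calico/nodename: no such file or
		// directory", matching the log; append the plugin's hint.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}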
Dec 13 01:11:27.312708 containerd[1551]: time="2024-12-13T01:11:27.312612060Z" level=error msg="encountered an error cleaning up failed sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.312708 containerd[1551]: time="2024-12-13T01:11:27.312664118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x6v2k,Uid:28737369-3c2e-42da-8f56-354717ebe21b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.313385 kubelet[2731]: E1213 01:11:27.312904 2731 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.313385 kubelet[2731]: E1213 01:11:27.312987 2731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-x6v2k" Dec 13 01:11:27.313385 kubelet[2731]: E1213 01:11:27.313012 2731 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-x6v2k" Dec 13 01:11:27.313604 kubelet[2731]: E1213 01:11:27.313073 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-x6v2k_kube-system(28737369-3c2e-42da-8f56-354717ebe21b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-x6v2k_kube-system(28737369-3c2e-42da-8f56-354717ebe21b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-x6v2k" podUID="28737369-3c2e-42da-8f56-354717ebe21b" Dec 13 01:11:27.317721 containerd[1551]: time="2024-12-13T01:11:27.317588567Z" level=error msg="Failed to destroy network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.318078 containerd[1551]: time="2024-12-13T01:11:27.318056577Z" level=error msg="encountered an error cleaning up failed sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.318654 containerd[1551]: time="2024-12-13T01:11:27.318154590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8lhww,Uid:4fa073bd-a2a4-4b4c-83a5-815ebeb43007,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.318756 kubelet[2731]: E1213 01:11:27.318378 2731 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.318756 kubelet[2731]: E1213 01:11:27.318422 2731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8lhww" Dec 13 01:11:27.318756 kubelet[2731]: E1213 01:11:27.318445 2731 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8lhww" Dec 13 01:11:27.318868 kubelet[2731]: E1213 01:11:27.318491 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-8lhww_kube-system(4fa073bd-a2a4-4b4c-83a5-815ebeb43007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-8lhww_kube-system(4fa073bd-a2a4-4b4c-83a5-815ebeb43007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8lhww" podUID="4fa073bd-a2a4-4b4c-83a5-815ebeb43007" Dec 13 01:11:27.325084 containerd[1551]: time="2024-12-13T01:11:27.325041816Z" level=error msg="Failed to destroy network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.325416 containerd[1551]: time="2024-12-13T01:11:27.325388477Z" level=error msg="encountered an error cleaning up failed sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.325537 containerd[1551]: time="2024-12-13T01:11:27.325432370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-fq2q4,Uid:ea139758-c514-433e-ba8c-28ff42ef6f58,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.325647 kubelet[2731]: E1213 01:11:27.325628 2731 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:27.325697 kubelet[2731]: E1213 01:11:27.325673 2731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" Dec 13 01:11:27.325697 kubelet[2731]: E1213 01:11:27.325692 2731 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" Dec 13 01:11:27.325758 kubelet[2731]: E1213 01:11:27.325747 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f8948c8d7-fq2q4_calico-apiserver(ea139758-c514-433e-ba8c-28ff42ef6f58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f8948c8d7-fq2q4_calico-apiserver(ea139758-c514-433e-ba8c-28ff42ef6f58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" podUID="ea139758-c514-433e-ba8c-28ff42ef6f58" Dec 13 01:11:27.350940 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 42512 ssh2: RSA 
SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:27.352946 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:27.357471 systemd-logind[1532]: New session 8 of user core. Dec 13 01:11:27.364611 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:11:27.477465 sshd[3699]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:27.481793 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:42512.service: Deactivated successfully. Dec 13 01:11:27.484215 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:11:27.484313 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:11:27.485282 systemd-logind[1532]: Removed session 8. Dec 13 01:11:27.803916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3-shm.mount: Deactivated successfully. Dec 13 01:11:28.101801 kubelet[2731]: I1213 01:11:28.101714 2731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:28.103286 containerd[1551]: time="2024-12-13T01:11:28.102476484Z" level=info msg="StopPodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\"" Dec 13 01:11:28.103286 containerd[1551]: time="2024-12-13T01:11:28.102645961Z" level=info msg="Ensure that sandbox fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1 in task-service has been cleanup successfully" Dec 13 01:11:28.103633 kubelet[2731]: I1213 01:11:28.102621 2731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:28.103835 containerd[1551]: time="2024-12-13T01:11:28.103796774Z" level=info msg="StopPodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\"" Dec 13 01:11:28.104080 containerd[1551]: time="2024-12-13T01:11:28.103987501Z" level=info msg="Ensure that sandbox ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3 in task-service has been cleanup successfully" Dec 13 01:11:28.104429 kubelet[2731]: I1213 01:11:28.104400 2731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:28.104946 containerd[1551]: time="2024-12-13T01:11:28.104919031Z" level=info msg="StopPodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\"" Dec 13 01:11:28.105439 containerd[1551]: time="2024-12-13T01:11:28.105109248Z" level=info msg="Ensure that sandbox 56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3 in task-service has been cleanup successfully" Dec 13 01:11:28.108403 kubelet[2731]: I1213 01:11:28.106816 2731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:28.109360 containerd[1551]: time="2024-12-13T01:11:28.109009263Z" level=info msg="StopPodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\"" Dec 13 01:11:28.109360 containerd[1551]: time="2024-12-13T01:11:28.109282586Z" level=info msg="Ensure that sandbox 56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c in task-service has been cleanup successfully" Dec 13 01:11:28.126043 kubelet[2731]: I1213 01:11:28.125735 2731 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:28.128463 containerd[1551]: time="2024-12-13T01:11:28.127698202Z" level=info msg="StopPodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\"" Dec 13 01:11:28.128463 containerd[1551]: time="2024-12-13T01:11:28.127865646Z" level=info msg="Ensure that sandbox 2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d in task-service has been cleanup successfully" Dec 13 01:11:28.131259 kubelet[2731]: I1213 01:11:28.131215 2731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:28.131898 containerd[1551]: time="2024-12-13T01:11:28.131866340Z" level=info msg="StopPodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\"" Dec 13 01:11:28.132075 containerd[1551]: time="2024-12-13T01:11:28.132055355Z" level=info msg="Ensure that sandbox 5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9 in task-service has been cleanup successfully" Dec 13 01:11:28.173210 containerd[1551]: time="2024-12-13T01:11:28.173158511Z" level=error msg="StopPodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" failed" error="failed to destroy network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:28.174653 kubelet[2731]: E1213 01:11:28.174551 2731 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:28.174653 kubelet[2731]: E1213 01:11:28.174639 2731 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1"} Dec 13 01:11:28.174847 kubelet[2731]: E1213 01:11:28.174680 2731 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea139758-c514-433e-ba8c-28ff42ef6f58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:11:28.174847 kubelet[2731]: E1213 01:11:28.174715 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea139758-c514-433e-ba8c-28ff42ef6f58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" podUID="ea139758-c514-433e-ba8c-28ff42ef6f58" Dec 13 01:11:28.176916 containerd[1551]: time="2024-12-13T01:11:28.176871324Z" level=error msg="StopPodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" failed" error="failed to destroy network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:28.177596 containerd[1551]: time="2024-12-13T01:11:28.176875191Z" level=error msg="StopPodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" failed" error="failed to destroy network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:28.177657 kubelet[2731]: E1213 01:11:28.177230 2731 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:28.177657 kubelet[2731]: E1213 01:11:28.177264 2731 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3"} Dec 13 01:11:28.177657 kubelet[2731]: E1213 01:11:28.177306 2731 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2a10eb6-3688-422c-b65b-01a7d73ed991\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:11:28.177657 kubelet[2731]: E1213 01:11:28.177380 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2a10eb6-3688-422c-b65b-01a7d73ed991\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" podUID="f2a10eb6-3688-422c-b65b-01a7d73ed991" Dec 13 01:11:28.177868 kubelet[2731]: E1213 01:11:28.177422 2731 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:28.177868 kubelet[2731]: E1213 01:11:28.177481 2731 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c"} Dec 13 01:11:28.177868 kubelet[2731]: E1213 01:11:28.177533 2731 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6bdff36-140a-401a-9765-907c2bbf003f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:11:28.177868 kubelet[2731]: E1213 01:11:28.177579 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6bdff36-140a-401a-9765-907c2bbf003f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" podUID="f6bdff36-140a-401a-9765-907c2bbf003f" Dec 13 01:11:28.181024 containerd[1551]: time="2024-12-13T01:11:28.180977926Z" level=error msg="StopPodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" failed" error="failed to destroy network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:28.181418 kubelet[2731]: E1213 01:11:28.181227 2731 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:28.181418 kubelet[2731]: E1213 01:11:28.181257 2731 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3"} Dec 13 01:11:28.181418 kubelet[2731]: E1213 01:11:28.181292 2731 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22974a96-e85c-4133-adca-45fb1d4311f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:11:28.181418 kubelet[2731]: E1213 01:11:28.181321 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22974a96-e85c-4133-adca-45fb1d4311f1\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bws6x" podUID="22974a96-e85c-4133-adca-45fb1d4311f1" Dec 13 01:11:28.191765 containerd[1551]: time="2024-12-13T01:11:28.191719119Z" level=error msg="StopPodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" failed" error="failed to destroy network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:28.192014 kubelet[2731]: E1213 01:11:28.191992 2731 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:28.192073 kubelet[2731]: E1213 01:11:28.192036 2731 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9"} Dec 13 01:11:28.192099 kubelet[2731]: E1213 01:11:28.192071 2731 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28737369-3c2e-42da-8f56-354717ebe21b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:11:28.192169 kubelet[2731]: E1213 01:11:28.192150 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28737369-3c2e-42da-8f56-354717ebe21b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-x6v2k" podUID="28737369-3c2e-42da-8f56-354717ebe21b" Dec 13 01:11:28.193511 containerd[1551]: time="2024-12-13T01:11:28.193467182Z" level=error msg="StopPodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" failed" error="failed to destroy network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:11:28.193611 kubelet[2731]: E1213 01:11:28.193592 2731 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to destroy network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:28.193664 kubelet[2731]: E1213 01:11:28.193618 2731 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d"} Dec 13 01:11:28.193664 kubelet[2731]: E1213 01:11:28.193647 2731 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fa073bd-a2a4-4b4c-83a5-815ebeb43007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:11:28.193759 kubelet[2731]: E1213 01:11:28.193667 2731 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fa073bd-a2a4-4b4c-83a5-815ebeb43007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8lhww" podUID="4fa073bd-a2a4-4b4c-83a5-815ebeb43007" Dec 13 01:11:30.973922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799969045.mount: Deactivated successfully. 
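Every StopPodSandbox failure above shares one root cause: the Calico CNI plugin's delete path stats /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico/ mounted. Until that happens, every teardown returns the same error and kubelet keeps re-queueing the pods, which is why the apiserver, kube-controllers, csi-node-driver, and coredns sandboxes all fail in lockstep. A minimal sketch of that gate (an illustrative helper, not Calico's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the errors above point at; calico/node writes it
// after startup, once /var/lib/calico/ is mounted into the container.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady mirrors the failing stat: without the file, a CNI delete
// cannot resolve the node name and must return an error.
func calicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Println(err) // the same failure kubelet keeps retrying in the log above
	}
}
```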
Dec 13 01:11:31.794046 containerd[1551]: time="2024-12-13T01:11:31.793978299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:31.795056 containerd[1551]: time="2024-12-13T01:11:31.794996420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:11:31.796446 containerd[1551]: time="2024-12-13T01:11:31.796410806Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:31.799119 containerd[1551]: time="2024-12-13T01:11:31.799081881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:31.799900 containerd[1551]: time="2024-12-13T01:11:31.799839213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.697880007s" Dec 13 01:11:31.799953 containerd[1551]: time="2024-12-13T01:11:31.799895899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:11:31.810336 containerd[1551]: time="2024-12-13T01:11:31.810266661Z" level=info msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:11:31.828053 containerd[1551]: time="2024-12-13T01:11:31.828004406Z" level=info msg="CreateContainer within sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\"" Dec 13 01:11:31.828765 containerd[1551]: time="2024-12-13T01:11:31.828551173Z" level=info msg="StartContainer for \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\"" Dec 13 01:11:32.149928 containerd[1551]: time="2024-12-13T01:11:32.149793380Z" level=info msg="StartContainer for \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\" returns successfully" Dec 13 01:11:32.175736 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:11:32.175893 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 01:11:32.490579 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:59980.service - OpenSSH per-connection server daemon (10.0.0.1:59980). Dec 13 01:11:32.522815 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 59980 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:32.524492 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:32.528619 systemd-logind[1532]: New session 9 of user core. Dec 13 01:11:32.537614 systemd[1]: Started session-9.scope - Session 9 of User core.
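The pull above carries its own benchmark: 142742010 bytes read in 4.697880007s works out to roughly 30.4 MB/s (about 29.0 MiB/s). Note that the "bytes read" counter (142742010) and the recorded image size (142741872) differ by 138 bytes; they are separate counters taken at different points in the pull, so an exact match is not expected. A quick check of the throughput arithmetic:

```go
package main

import "fmt"

func main() {
	const bytesRead = 142742010.0 // from "active requests=0, bytes read=142742010"
	const seconds = 4.697880007   // from "... size \"142741872\" in 4.697880007s"
	fmt.Printf("%.1f MB/s, %.1f MiB/s\n", bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
}
```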
Dec 13 01:11:32.710589 sshd[3919]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:32.715069 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:59980.service: Deactivated successfully. Dec 13 01:11:32.718078 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:11:32.718452 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:11:32.719749 systemd-logind[1532]: Removed session 9. Dec 13 01:11:33.155246 kubelet[2731]: E1213 01:11:33.155198 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:33.910401 kernel: bpftool[4070]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:11:34.156833 kubelet[2731]: I1213 01:11:34.156629 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:34.159227 kubelet[2731]: E1213 01:11:34.159181 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:34.170133 systemd-networkd[1241]: vxlan.calico: Link UP Dec 13 01:11:34.170144 systemd-networkd[1241]: vxlan.calico: Gained carrier Dec 13 01:11:35.913603 systemd-networkd[1241]: vxlan.calico: Gained IPv6LL Dec 13 01:11:37.719624 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:59988.service - OpenSSH per-connection server daemon (10.0.0.1:59988). Dec 13 01:11:37.751013 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 59988 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:37.752954 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:37.756872 systemd-logind[1532]: New session 10 of user core. Dec 13 01:11:37.761644 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:11:37.893900 sshd[4145]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:37.897716 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:59988.service: Deactivated successfully. Dec 13 01:11:37.900109 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:11:37.900243 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:11:37.901501 systemd-logind[1532]: Removed session 10. 
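Two things worth unpacking here. The vxlan.calico link gaining carrier is Calico's VXLAN overlay device coming up, possible now that calico-node started at 01:11:32. And the recurring "Nameserver limits exceeded" event is kubelet's DNS configuration step: the classic resolv.conf convention allows at most three nameservers (MAXNS = 3), so when the node lists more, kubelet drops the extras and applies only the first three, here 1.1.1.1, 1.0.0.1, and 8.8.8.8. A sketch of that truncation (illustrative, not kubelet's code; the fourth entry below is hypothetical, since the log only shows the surviving three):

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the resolv.conf limit (MAXNS = 3) that kubelet
// applies when assembling a pod's DNS configuration.
const maxNameservers = 3

func applyLimit(nameservers []string) []string {
	if len(nameservers) > maxNameservers {
		nameservers = nameservers[:maxNameservers] // extras are omitted, hence the warning
	}
	return nameservers
}

func main() {
	// Hypothetical host list with one entry too many.
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println("applied nameserver line is:", strings.Join(applyLimit(ns), " "))
}
```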
Dec 13 01:11:39.032976 containerd[1551]: time="2024-12-13T01:11:39.032932705Z" level=info msg="StopPodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\"" Dec 13 01:11:39.033506 containerd[1551]: time="2024-12-13T01:11:39.033023896Z" level=info msg="StopPodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\"" Dec 13 01:11:39.033506 containerd[1551]: time="2024-12-13T01:11:39.033100710Z" level=info msg="StopPodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\"" Dec 13 01:11:39.177420 kubelet[2731]: I1213 01:11:39.176051 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kh9rl" podStartSLOduration=8.162971996 podStartE2EDuration="22.176004629s" podCreationTimestamp="2024-12-13 01:11:17 +0000 UTC" firstStartedPulling="2024-12-13 01:11:17.787151167 +0000 UTC m=+19.841082348" lastFinishedPulling="2024-12-13 01:11:31.8001838 +0000 UTC m=+33.854114981" observedRunningTime="2024-12-13 01:11:33.198299212 +0000 UTC m=+35.252230413" watchObservedRunningTime="2024-12-13 01:11:39.176004629 +0000 UTC m=+41.229935810" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.175 [INFO][4210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.176 [INFO][4210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" iface="eth0" netns="/var/run/netns/cni-b5c560cd-6d6f-fede-baa3-bf054bbd73f1" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.176 [INFO][4210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" iface="eth0" netns="/var/run/netns/cni-b5c560cd-6d6f-fede-baa3-bf054bbd73f1" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" iface="eth0" netns="/var/run/netns/cni-b5c560cd-6d6f-fede-baa3-bf054bbd73f1" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.226 [INFO][4233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.227 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.227 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.233 [WARNING][4233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.233 [INFO][4233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.234 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:39.240717 containerd[1551]: 2024-12-13 01:11:39.237 [INFO][4210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:39.241318 containerd[1551]: time="2024-12-13T01:11:39.240916600Z" level=info msg="TearDown network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" successfully" Dec 13 01:11:39.241318 containerd[1551]: time="2024-12-13T01:11:39.240954050Z" level=info msg="StopPodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" returns successfully" Dec 13 01:11:39.242758 containerd[1551]: time="2024-12-13T01:11:39.241767827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c9b9587d-vnvps,Uid:f6bdff36-140a-401a-9765-907c2bbf003f,Namespace:calico-system,Attempt:1,}" Dec 13 01:11:39.243966 systemd[1]: run-netns-cni\x2db5c560cd\x2d6d6f\x2dfede\x2dbaa3\x2dbf054bbd73f1.mount: Deactivated successfully. Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.175 [INFO][4211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.175 [INFO][4211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" iface="eth0" netns="/var/run/netns/cni-56dc2a9c-ece0-f0b8-8cde-a0ffc69704e3" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.176 [INFO][4211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" iface="eth0" netns="/var/run/netns/cni-56dc2a9c-ece0-f0b8-8cde-a0ffc69704e3" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" iface="eth0" netns="/var/run/netns/cni-56dc2a9c-ece0-f0b8-8cde-a0ffc69704e3" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.226 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.227 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.234 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.239 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.239 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.242 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:39.247496 containerd[1551]: 2024-12-13 01:11:39.245 [INFO][4211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:39.250472 containerd[1551]: time="2024-12-13T01:11:39.250446293Z" level=info msg="TearDown network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" successfully" Dec 13 01:11:39.250472 containerd[1551]: time="2024-12-13T01:11:39.250471040Z" level=info msg="StopPodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" returns successfully" Dec 13 01:11:39.251078 containerd[1551]: time="2024-12-13T01:11:39.251016193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-gfxjh,Uid:f2a10eb6-3688-422c-b65b-01a7d73ed991,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:11:39.251259 systemd[1]: run-netns-cni\x2d56dc2a9c\x2dece0\x2df0b8\x2d8cde\x2da0ffc69704e3.mount: Deactivated successfully. Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.175 [INFO][4209] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.176 [INFO][4209] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" iface="eth0" netns="/var/run/netns/cni-af173062-815b-4f99-f819-860ec93f843c" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.176 [INFO][4209] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" iface="eth0" netns="/var/run/netns/cni-af173062-815b-4f99-f819-860ec93f843c" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4209] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" iface="eth0" netns="/var/run/netns/cni-af173062-815b-4f99-f819-860ec93f843c" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4209] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.177 [INFO][4209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.226 [INFO][4234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.227 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.242 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.249 [WARNING][4234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.249 [INFO][4234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.255 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:39.265746 containerd[1551]: 2024-12-13 01:11:39.258 [INFO][4209] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:39.266411 containerd[1551]: time="2024-12-13T01:11:39.265919322Z" level=info msg="TearDown network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" successfully" Dec 13 01:11:39.266411 containerd[1551]: time="2024-12-13T01:11:39.265960439Z" level=info msg="StopPodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" returns successfully" Dec 13 01:11:39.266540 kubelet[2731]: E1213 01:11:39.266459 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:39.267156 containerd[1551]: time="2024-12-13T01:11:39.266888301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x6v2k,Uid:28737369-3c2e-42da-8f56-354717ebe21b,Namespace:kube-system,Attempt:1,}" Dec 13 01:11:39.268543 systemd[1]: run-netns-cni\x2daf173062\x2d815b\x2d4f99\x2df819\x2d860ec93f843c.mount: Deactivated successfully. Dec 13 01:11:39.408943 systemd-networkd[1241]: cali3d2ca991923: Link UP Dec 13 01:11:39.409299 systemd-networkd[1241]: cali3d2ca991923: Gained carrier Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.333 [INFO][4258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0 calico-kube-controllers-54c9b9587d- calico-system f6bdff36-140a-401a-9765-907c2bbf003f 919 0 2024-12-13 01:11:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54c9b9587d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54c9b9587d-vnvps eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3d2ca991923 [] []}} ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.334 [INFO][4258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.361 [INFO][4300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" HandleID="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.377 [INFO][4300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" HandleID="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acde0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54c9b9587d-vnvps", "timestamp":"2024-12-13 01:11:39.361033172 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.377 [INFO][4300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.378 [INFO][4300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.378 [INFO][4300] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.379 [INFO][4300] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.386 [INFO][4300] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.390 [INFO][4300] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.391 [INFO][4300] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.393 [INFO][4300] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.393 [INFO][4300] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.394 [INFO][4300] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2 Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.397 [INFO][4300] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.401 [INFO][4300] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.401 [INFO][4300] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" host="localhost" Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.401 [INFO][4300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
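Two milestones in the trace above. First, the teardowns that failed at 01:11:28 now complete cleanly ("Workload's veth was already gone. Nothing to do."), because calico-node has since written /var/lib/calico/nodename. Second, this is the first pod-IP assignment on the node: the plugin takes a host-wide lock, confirms the host's affinity to the block 192.168.88.128/26 (64 addresses), and claims 192.168.88.129 for the kube-controllers pod. The block arithmetic, using only the standard library (plain CIDR math, not Calico's IPAM code):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - block.Bits()) // a /26 spans 64 addresses: .128 through .191
	fmt.Println("block:", block, "addresses:", size)
	fmt.Println("first claimed here:", block.Addr().Next()) // 192.168.88.129, as in the log
}
```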
Dec 13 01:11:39.422098 containerd[1551]: 2024-12-13 01:11:39.401 [INFO][4300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" HandleID="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.423004 containerd[1551]: 2024-12-13 01:11:39.405 [INFO][4258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0", GenerateName:"calico-kube-controllers-54c9b9587d-", Namespace:"calico-system", SelfLink:"", UID:"f6bdff36-140a-401a-9765-907c2bbf003f", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c9b9587d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54c9b9587d-vnvps", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d2ca991923", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:39.423004 containerd[1551]: 2024-12-13 01:11:39.406 [INFO][4258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.423004 containerd[1551]: 2024-12-13 01:11:39.406 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d2ca991923 ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.423004 containerd[1551]: 2024-12-13 01:11:39.408 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.423004 containerd[1551]: 2024-12-13 01:11:39.408 [INFO][4258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0", GenerateName:"calico-kube-controllers-54c9b9587d-", Namespace:"calico-system", SelfLink:"", UID:"f6bdff36-140a-401a-9765-907c2bbf003f", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c9b9587d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2", Pod:"calico-kube-controllers-54c9b9587d-vnvps", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d2ca991923", MAC:"82:45:11:4e:3c:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:39.423004 containerd[1551]: 2024-12-13 01:11:39.419 [INFO][4258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Namespace="calico-system" Pod="calico-kube-controllers-54c9b9587d-vnvps" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:39.447429 systemd-networkd[1241]: cali7965917c119: Link UP Dec 13 01:11:39.448877 systemd-networkd[1241]: cali7965917c119: Gained carrier Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.335 [INFO][4274] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--x6v2k-eth0 coredns-76f75df574- kube-system 28737369-3c2e-42da-8f56-354717ebe21b 918 0 2024-12-13 01:11:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-x6v2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7965917c119 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.335 [INFO][4274] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.377 [INFO][4306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" HandleID="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.386 [INFO][4306] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" HandleID="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-x6v2k", "timestamp":"2024-12-13 01:11:39.377745076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.386 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.401 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.401 [INFO][4306] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.403 [INFO][4306] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.408 [INFO][4306] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.412 [INFO][4306] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.413 [INFO][4306] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.418 [INFO][4306] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.418 [INFO][4306] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.420 [INFO][4306] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.430 [INFO][4306] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.436 [INFO][4306] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.436 [INFO][4306] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" host="localhost" Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.436 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:39.466532 containerd[1551]: 2024-12-13 01:11:39.436 [INFO][4306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" HandleID="k8s-pod-network.d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.467772 containerd[1551]: 2024-12-13 01:11:39.442 [INFO][4274] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x6v2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28737369-3c2e-42da-8f56-354717ebe21b", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-x6v2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7965917c119", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:39.467772 containerd[1551]: 2024-12-13 01:11:39.442 [INFO][4274] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.467772 containerd[1551]: 2024-12-13 01:11:39.443 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7965917c119 ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" 
Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.467772 containerd[1551]: 2024-12-13 01:11:39.450 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.467772 containerd[1551]: 2024-12-13 01:11:39.450 [INFO][4274] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x6v2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28737369-3c2e-42da-8f56-354717ebe21b", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c", Pod:"coredns-76f75df574-x6v2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7965917c119", MAC:"12:cb:2d:42:45:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:39.467772 containerd[1551]: 2024-12-13 01:11:39.462 [INFO][4274] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c" Namespace="kube-system" Pod="coredns-76f75df574-x6v2k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:39.474288 containerd[1551]: time="2024-12-13T01:11:39.473803148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:11:39.474288 containerd[1551]: time="2024-12-13T01:11:39.473919537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:11:39.474288 containerd[1551]: time="2024-12-13T01:11:39.473938543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:39.475660 containerd[1551]: time="2024-12-13T01:11:39.475535299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:39.481271 systemd-networkd[1241]: calib6ed1627216: Link UP Dec 13 01:11:39.483045 systemd-networkd[1241]: calib6ed1627216: Gained carrier Dec 13 01:11:39.506634 containerd[1551]: time="2024-12-13T01:11:39.506535166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:11:39.507383 containerd[1551]: time="2024-12-13T01:11:39.507187009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:11:39.507383 containerd[1551]: time="2024-12-13T01:11:39.507313718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:39.507621 containerd[1551]: time="2024-12-13T01:11:39.507508383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:39.507775 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:11:39.537140 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:11:39.547610 containerd[1551]: time="2024-12-13T01:11:39.547571830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c9b9587d-vnvps,Uid:f6bdff36-140a-401a-9765-907c2bbf003f,Namespace:calico-system,Attempt:1,} returns sandbox id \"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2\"" Dec 13 01:11:39.548929 containerd[1551]: time="2024-12-13T01:11:39.548900283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:11:39.566599 containerd[1551]: time="2024-12-13T01:11:39.566560636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x6v2k,Uid:28737369-3c2e-42da-8f56-354717ebe21b,Namespace:kube-system,Attempt:1,} returns sandbox id \"d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c\"" Dec 13 01:11:39.567175 kubelet[2731]: E1213 01:11:39.567154 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:39.568814 containerd[1551]: time="2024-12-13T01:11:39.568790621Z" level=info msg="CreateContainer within sandbox \"d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.334 [INFO][4269] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0 calico-apiserver-7f8948c8d7- calico-apiserver f2a10eb6-3688-422c-b65b-01a7d73ed991 917 0 2024-12-13 01:11:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f8948c8d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f8948c8d7-gfxjh eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6ed1627216 [] []}} ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.334 [INFO][4269] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.379 [INFO][4301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" HandleID="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.388 [INFO][4301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" HandleID="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f8948c8d7-gfxjh", "timestamp":"2024-12-13 01:11:39.379349376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.388 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.436 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
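A small decoding note for the coredns endpoint dumps above: the ports are printed as hex in the Go struct literals, and they resolve to the standard CoreDNS ports, 0x35 being 53 (dns over UDP and TCP) and 0x23c1 being 9153 (the metrics endpoint):

```go
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153: the dns and metrics ports from the dump
}
```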
Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.436 [INFO][4301] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.438 [INFO][4301] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.442 [INFO][4301] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.447 [INFO][4301] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.449 [INFO][4301] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.452 [INFO][4301] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.452 [INFO][4301] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.454 [INFO][4301] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.461 [INFO][4301] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.470 [INFO][4301] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.470 [INFO][4301] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" host="localhost" Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.470 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:11:39.584227 containerd[1551]: 2024-12-13 01:11:39.470 [INFO][4301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" HandleID="k8s-pod-network.469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.585248 containerd[1551]: 2024-12-13 01:11:39.474 [INFO][4269] cni-plugin/k8s.go 386: Populated endpoint ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2a10eb6-3688-422c-b65b-01a7d73ed991", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f8948c8d7-gfxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6ed1627216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:39.585248 containerd[1551]: 2024-12-13 01:11:39.474 [INFO][4269] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.585248 containerd[1551]: 2024-12-13 01:11:39.475 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6ed1627216 ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.585248 containerd[1551]: 2024-12-13 01:11:39.485 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.585248 containerd[1551]: 2024-12-13 01:11:39.485 [INFO][4269] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2a10eb6-3688-422c-b65b-01a7d73ed991", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f", Pod:"calico-apiserver-7f8948c8d7-gfxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6ed1627216", MAC:"a6:c6:0a:cf:87:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:39.585248 containerd[1551]: 2024-12-13 01:11:39.580 [INFO][4269] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-gfxjh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:39.770174 containerd[1551]: time="2024-12-13T01:11:39.770033293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:11:39.770332 containerd[1551]: time="2024-12-13T01:11:39.770156675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:11:39.770332 containerd[1551]: time="2024-12-13T01:11:39.770186450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:39.770332 containerd[1551]: time="2024-12-13T01:11:39.770293221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:39.774145 kubelet[2731]: I1213 01:11:39.774105 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:39.775158 kubelet[2731]: E1213 01:11:39.775134 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:39.797719 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:11:39.826725 containerd[1551]: time="2024-12-13T01:11:39.826679401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-gfxjh,Uid:f2a10eb6-3688-422c-b65b-01a7d73ed991,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f\"" Dec 13 01:11:39.872163 containerd[1551]: time="2024-12-13T01:11:39.872115772Z" level=info msg="CreateContainer within sandbox \"d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fb7758181fb6949206e7649d23afd5852f64fc88873ce93a578ea91ea254175a\"" Dec 13 01:11:39.873582 containerd[1551]: time="2024-12-13T01:11:39.873519556Z" level=info msg="StartContainer for \"fb7758181fb6949206e7649d23afd5852f64fc88873ce93a578ea91ea254175a\"" Dec 13 01:11:39.938168 containerd[1551]: time="2024-12-13T01:11:39.938118439Z" level=info msg="StartContainer for \"fb7758181fb6949206e7649d23afd5852f64fc88873ce93a578ea91ea254175a\" returns successfully" Dec 13 01:11:40.033786 containerd[1551]: time="2024-12-13T01:11:40.033650481Z" level=info msg="StopPodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\"" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.079 [INFO][4594] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.079 [INFO][4594] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" iface="eth0" netns="/var/run/netns/cni-d621df11-663a-70d8-d24f-d79bacdf6ea9" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.079 [INFO][4594] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" iface="eth0" netns="/var/run/netns/cni-d621df11-663a-70d8-d24f-d79bacdf6ea9" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.079 [INFO][4594] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" iface="eth0" netns="/var/run/netns/cni-d621df11-663a-70d8-d24f-d79bacdf6ea9" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.079 [INFO][4594] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.079 [INFO][4594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.100 [INFO][4601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.100 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.100 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.105 [WARNING][4601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.105 [INFO][4601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.109 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:40.114186 containerd[1551]: 2024-12-13 01:11:40.112 [INFO][4594] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:40.114618 containerd[1551]: time="2024-12-13T01:11:40.114380440Z" level=info msg="TearDown network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" successfully" Dec 13 01:11:40.114618 containerd[1551]: time="2024-12-13T01:11:40.114405748Z" level=info msg="StopPodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" returns successfully" Dec 13 01:11:40.115163 containerd[1551]: time="2024-12-13T01:11:40.115137441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-fq2q4,Uid:ea139758-c514-433e-ba8c-28ff42ef6f58,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:11:40.176577 kubelet[2731]: E1213 01:11:40.176540 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:40.188242 kubelet[2731]: E1213 01:11:40.188198 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:40.191652 kubelet[2731]: I1213 01:11:40.191172 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x6v2k" podStartSLOduration=29.191125281 podStartE2EDuration="29.191125281s" podCreationTimestamp="2024-12-13 01:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:11:40.190585428 +0000 UTC m=+42.244516640" watchObservedRunningTime="2024-12-13 01:11:40.191125281 +0000 UTC m=+42.245056462" Dec 13 01:11:40.257336 systemd[1]: run-netns-cni\x2dd621df11\x2d663a\x2d70d8\x2dd24f\x2dd79bacdf6ea9.mount: Deactivated successfully. 
Dec 13 01:11:40.269309 systemd-networkd[1241]: calie21320f1eca: Link UP Dec 13 01:11:40.270480 systemd-networkd[1241]: calie21320f1eca: Gained carrier Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.184 [INFO][4608] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0 calico-apiserver-7f8948c8d7- calico-apiserver ea139758-c514-433e-ba8c-28ff42ef6f58 946 0 2024-12-13 01:11:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f8948c8d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f8948c8d7-fq2q4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie21320f1eca [] []}} ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.184 [INFO][4608] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.227 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" HandleID="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.235 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" HandleID="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f8948c8d7-fq2q4", "timestamp":"2024-12-13 01:11:40.227513628 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.235 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.235 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.235 [INFO][4622] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.237 [INFO][4622] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.241 [INFO][4622] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.246 [INFO][4622] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.248 [INFO][4622] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.251 [INFO][4622] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.251 [INFO][4622] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.255 [INFO][4622] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.259 [INFO][4622] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.263 [INFO][4622] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.263 [INFO][4622] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" host="localhost" Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.263 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
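[Editor's note] Every allocation in this log is bracketed by the same three ipam_plugin.go entries: "About to acquire host-wide IPAM lock", "Acquired", and "Released" only after the block write. The real lock spans CNI invocations across the whole host and its implementation is not visible here; the sketch below uses a process-local mutex purely to illustrate the discipline that keeps two concurrent sandbox setups from claiming the same address:

```go
package main

import (
	"fmt"
	"sync"
)

// Process-local stand-in for the host-wide IPAM lock seen in the log.
var ipamLock sync.Mutex

func claim(next *int) int {
	ipamLock.Lock()
	defer ipamLock.Unlock() // released only after the "block write" is done
	ip := *next
	*next = ip + 1
	return ip
}

func main() {
	next := 131 // .131 was the next free address in 192.168.88.128/26 above
	var wg sync.WaitGroup
	for _, pod := range []string{"gfxjh", "fq2q4"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Printf("%s -> 192.168.88.%d/26\n", pod, claim(&next))
		}(pod)
	}
	wg.Wait() // without the lock, both claims could race to the same address
}
```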
Dec 13 01:11:40.281343 containerd[1551]: 2024-12-13 01:11:40.263 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" HandleID="k8s-pod-network.58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.282122 containerd[1551]: 2024-12-13 01:11:40.266 [INFO][4608] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea139758-c514-433e-ba8c-28ff42ef6f58", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f8948c8d7-fq2q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie21320f1eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:40.282122 containerd[1551]: 2024-12-13 01:11:40.266 [INFO][4608] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.282122 containerd[1551]: 2024-12-13 01:11:40.266 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie21320f1eca ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.282122 containerd[1551]: 2024-12-13 01:11:40.268 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.282122 containerd[1551]: 2024-12-13 01:11:40.268 [INFO][4608] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea139758-c514-433e-ba8c-28ff42ef6f58", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff", Pod:"calico-apiserver-7f8948c8d7-fq2q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie21320f1eca", MAC:"0e:21:2b:a7:02:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:40.282122 containerd[1551]: 2024-12-13 01:11:40.278 [INFO][4608] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff" Namespace="calico-apiserver" Pod="calico-apiserver-7f8948c8d7-fq2q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:40.303917 containerd[1551]: time="2024-12-13T01:11:40.303739395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:11:40.303917 containerd[1551]: time="2024-12-13T01:11:40.303819846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:11:40.303917 containerd[1551]: time="2024-12-13T01:11:40.303836648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:40.304078 containerd[1551]: time="2024-12-13T01:11:40.303950391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:40.327685 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:11:40.356454 containerd[1551]: time="2024-12-13T01:11:40.356403195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8948c8d7-fq2q4,Uid:ea139758-c514-433e-ba8c-28ff42ef6f58,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff\"" Dec 13 01:11:40.521551 systemd-networkd[1241]: cali3d2ca991923: Gained IPv6LL Dec 13 01:11:40.841598 systemd-networkd[1241]: cali7965917c119: Gained IPv6LL Dec 13 01:11:41.097561 systemd-networkd[1241]: calib6ed1627216: Gained IPv6LL Dec 13 01:11:41.191685 kubelet[2731]: E1213 01:11:41.191641 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:41.772826 containerd[1551]: time="2024-12-13T01:11:41.772771575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:41.773493 containerd[1551]: time="2024-12-13T01:11:41.773437745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:11:41.774577 containerd[1551]: time="2024-12-13T01:11:41.774536567Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:41.776605 containerd[1551]: time="2024-12-13T01:11:41.776568921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:41.777110 containerd[1551]: time="2024-12-13T01:11:41.777062457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.228134401s" Dec 13 01:11:41.777110 containerd[1551]: time="2024-12-13T01:11:41.777106359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:11:41.778428 containerd[1551]: time="2024-12-13T01:11:41.778401550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:11:41.785153 containerd[1551]: time="2024-12-13T01:11:41.785109798Z" level=info msg="CreateContainer within sandbox \"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:11:41.799749 containerd[1551]: time="2024-12-13T01:11:41.799699626Z" level=info msg="CreateContainer within sandbox \"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\"" Dec 13 01:11:41.800698 containerd[1551]: 
time="2024-12-13T01:11:41.800648106Z" level=info msg="StartContainer for \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\"" Dec 13 01:11:41.865592 systemd-networkd[1241]: calie21320f1eca: Gained IPv6LL Dec 13 01:11:41.875022 containerd[1551]: time="2024-12-13T01:11:41.874988529Z" level=info msg="StartContainer for \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\" returns successfully" Dec 13 01:11:42.033248 containerd[1551]: time="2024-12-13T01:11:42.033088878Z" level=info msg="StopPodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\"" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.077 [INFO][4747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.077 [INFO][4747] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" iface="eth0" netns="/var/run/netns/cni-5e598cf9-375c-cbb9-c14e-039bdd857ae9" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.077 [INFO][4747] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" iface="eth0" netns="/var/run/netns/cni-5e598cf9-375c-cbb9-c14e-039bdd857ae9" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.078 [INFO][4747] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" iface="eth0" netns="/var/run/netns/cni-5e598cf9-375c-cbb9-c14e-039bdd857ae9" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.078 [INFO][4747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.078 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.100 [INFO][4755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.101 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.101 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.106 [WARNING][4755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.106 [INFO][4755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.107 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:42.112566 containerd[1551]: 2024-12-13 01:11:42.110 [INFO][4747] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:42.112972 containerd[1551]: time="2024-12-13T01:11:42.112770758Z" level=info msg="TearDown network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" successfully" Dec 13 01:11:42.112972 containerd[1551]: time="2024-12-13T01:11:42.112797538Z" level=info msg="StopPodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" returns successfully" Dec 13 01:11:42.113145 kubelet[2731]: E1213 01:11:42.113112 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:42.113524 containerd[1551]: time="2024-12-13T01:11:42.113499706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8lhww,Uid:4fa073bd-a2a4-4b4c-83a5-815ebeb43007,Namespace:kube-system,Attempt:1,}" Dec 13 01:11:42.195399 kubelet[2731]: E1213 01:11:42.195333 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:42.299531 kubelet[2731]: I1213 01:11:42.298886 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54c9b9587d-vnvps" podStartSLOduration=23.069982482 podStartE2EDuration="25.298848737s" podCreationTimestamp="2024-12-13 01:11:17 +0000 UTC" firstStartedPulling="2024-12-13 01:11:39.548535719 +0000 UTC m=+41.602466900" lastFinishedPulling="2024-12-13 01:11:41.777401974 +0000 UTC m=+43.831333155" observedRunningTime="2024-12-13 01:11:42.298682004 +0000 UTC m=+44.352613185" watchObservedRunningTime="2024-12-13 01:11:42.298848737 +0000 UTC m=+44.352779918" Dec 13 01:11:42.394707 systemd-networkd[1241]: cali696f5b22eee: Link UP Dec 13 01:11:42.395145 systemd-networkd[1241]: cali696f5b22eee: Gained carrier Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.152 [INFO][4763] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--8lhww-eth0 coredns-76f75df574- kube-system 4fa073bd-a2a4-4b4c-83a5-815ebeb43007 975 0 2024-12-13 01:11:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-8lhww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali696f5b22eee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.153 [INFO][4763] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.181 [INFO][4776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" HandleID="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.188 [INFO][4776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" HandleID="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2e20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-8lhww", "timestamp":"2024-12-13 01:11:42.181855223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.188 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.189 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.189 [INFO][4776] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.190 [INFO][4776] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.193 [INFO][4776] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.198 [INFO][4776] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.261 [INFO][4776] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.299 [INFO][4776] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.299 [INFO][4776] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.301 [INFO][4776] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.372 [INFO][4776] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.388 [INFO][4776] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.389 [INFO][4776] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" host="localhost" Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.389 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:11:42.407458 containerd[1551]: 2024-12-13 01:11:42.389 [INFO][4776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" HandleID="k8s-pod-network.89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.408055 containerd[1551]: 2024-12-13 01:11:42.392 [INFO][4763] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--8lhww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fa073bd-a2a4-4b4c-83a5-815ebeb43007", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-8lhww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali696f5b22eee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:42.408055 containerd[1551]: 2024-12-13 01:11:42.392 [INFO][4763] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.408055 containerd[1551]: 2024-12-13 01:11:42.393 [INFO][4763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali696f5b22eee ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.408055 containerd[1551]: 2024-12-13 01:11:42.394 [INFO][4763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.408055 containerd[1551]: 2024-12-13 01:11:42.395 
[INFO][4763] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--8lhww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fa073bd-a2a4-4b4c-83a5-815ebeb43007", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c", Pod:"coredns-76f75df574-8lhww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali696f5b22eee", MAC:"1e:39:78:58:a2:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:42.408055 containerd[1551]: 2024-12-13 01:11:42.404 [INFO][4763] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c" Namespace="kube-system" Pod="coredns-76f75df574-8lhww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:42.428730 containerd[1551]: time="2024-12-13T01:11:42.428573240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:11:42.428730 containerd[1551]: time="2024-12-13T01:11:42.428627242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:11:42.428730 containerd[1551]: time="2024-12-13T01:11:42.428650165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:42.428918 containerd[1551]: time="2024-12-13T01:11:42.428862052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:42.459733 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:11:42.486510 containerd[1551]: time="2024-12-13T01:11:42.486466485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8lhww,Uid:4fa073bd-a2a4-4b4c-83a5-815ebeb43007,Namespace:kube-system,Attempt:1,} returns sandbox id \"89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c\"" Dec 13 01:11:42.487101 kubelet[2731]: E1213 01:11:42.487080 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:42.489901 containerd[1551]: time="2024-12-13T01:11:42.489795883Z" level=info msg="CreateContainer within sandbox \"89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:11:42.594215 containerd[1551]: time="2024-12-13T01:11:42.594026107Z" level=info msg="CreateContainer within sandbox \"89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0711fc46eb29f5b3532a652c64935b797093cbddb6a705fd9bdd55d5f39b30f\"" Dec 13 01:11:42.594723 containerd[1551]: time="2024-12-13T01:11:42.594677930Z" level=info msg="StartContainer for \"e0711fc46eb29f5b3532a652c64935b797093cbddb6a705fd9bdd55d5f39b30f\"" Dec 13 01:11:42.739786 containerd[1551]: time="2024-12-13T01:11:42.739725156Z" level=info msg="StartContainer for \"e0711fc46eb29f5b3532a652c64935b797093cbddb6a705fd9bdd55d5f39b30f\" returns successfully" Dec 13 01:11:42.786001 systemd[1]: run-netns-cni\x2d5e598cf9\x2d375c\x2dcbb9\x2dc14e\x2d039bdd857ae9.mount: Deactivated successfully. Dec 13 01:11:42.907575 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756). Dec 13 01:11:42.939473 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:42.941035 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:42.944898 systemd-logind[1532]: New session 11 of user core. Dec 13 01:11:42.952618 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:11:43.033155 containerd[1551]: time="2024-12-13T01:11:43.033102096Z" level=info msg="StopPodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\"" Dec 13 01:11:43.091495 sshd[4880]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:43.105400 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:52758.service - OpenSSH per-connection server daemon (10.0.0.1:52758). Dec 13 01:11:43.106096 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:52756.service: Deactivated successfully. Dec 13 01:11:43.110461 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:11:43.112772 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:11:43.114718 systemd-logind[1532]: Removed session 11. 
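[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors throughout this log are kubelet enforcing the glibc resolver limit of three nameservers: the host's resolv.conf lists more than three, so pods get only the first three (1.1.1.1 1.0.0.1 8.8.8.8, per the error text). A minimal standalone check of the same condition against a resolv.conf file:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		fmt.Printf("%d nameservers configured; only the first 3 apply: %v\n",
			len(servers), servers[:3])
	}
}
```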
Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.075 [INFO][4909] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.076 [INFO][4909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" iface="eth0" netns="/var/run/netns/cni-67f6492b-f860-3d35-f084-7807d89b01ff" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.076 [INFO][4909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" iface="eth0" netns="/var/run/netns/cni-67f6492b-f860-3d35-f084-7807d89b01ff" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.076 [INFO][4909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" iface="eth0" netns="/var/run/netns/cni-67f6492b-f860-3d35-f084-7807d89b01ff" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.076 [INFO][4909] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.076 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.108 [INFO][4917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.108 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.108 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.113 [WARNING][4917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.113 [INFO][4917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.114 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:43.121029 containerd[1551]: 2024-12-13 01:11:43.118 [INFO][4909] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:43.121830 containerd[1551]: time="2024-12-13T01:11:43.121209810Z" level=info msg="TearDown network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" successfully" Dec 13 01:11:43.121830 containerd[1551]: time="2024-12-13T01:11:43.121244155Z" level=info msg="StopPodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" returns successfully" Dec 13 01:11:43.122542 containerd[1551]: time="2024-12-13T01:11:43.122501053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bws6x,Uid:22974a96-e85c-4133-adca-45fb1d4311f1,Namespace:calico-system,Attempt:1,}" Dec 13 01:11:43.126143 systemd[1]: run-netns-cni\x2d67f6492b\x2df860\x2d3d35\x2df084\x2d7807d89b01ff.mount: Deactivated successfully. Dec 13 01:11:43.139882 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 52758 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:43.142004 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:43.154507 systemd-logind[1532]: New session 12 of user core. Dec 13 01:11:43.161684 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:11:43.198128 kubelet[2731]: I1213 01:11:43.198099 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:43.198984 kubelet[2731]: E1213 01:11:43.198967 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:43.215529 kubelet[2731]: I1213 01:11:43.215269 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8lhww" podStartSLOduration=32.215229618 podStartE2EDuration="32.215229618s" podCreationTimestamp="2024-12-13 01:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:11:43.207920444 +0000 UTC m=+45.261851625" watchObservedRunningTime="2024-12-13 01:11:43.215229618 +0000 UTC m=+45.269160799" Dec 13 01:11:43.273007 systemd-networkd[1241]: cali5b583b91d80: Link UP Dec 13 01:11:43.273981 systemd-networkd[1241]: cali5b583b91d80: Gained carrier Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.176 [INFO][4930] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bws6x-eth0 csi-node-driver- calico-system 22974a96-e85c-4133-adca-45fb1d4311f1 988 0 2024-12-13 01:11:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bws6x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5b583b91d80 [] []}} ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.176 [INFO][4930] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" 
Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.207 [INFO][4945] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" HandleID="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.220 [INFO][4945] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" HandleID="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003751d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bws6x", "timestamp":"2024-12-13 01:11:43.207029312 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.220 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.220 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.220 [INFO][4945] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.225 [INFO][4945] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.232 [INFO][4945] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.243 [INFO][4945] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.245 [INFO][4945] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.248 [INFO][4945] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.248 [INFO][4945] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.249 [INFO][4945] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753 Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.252 [INFO][4945] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.264 [INFO][4945] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.264 [INFO][4945] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" host="localhost" Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.264 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:43.292838 containerd[1551]: 2024-12-13 01:11:43.264 [INFO][4945] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" HandleID="k8s-pod-network.c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.293627 containerd[1551]: 2024-12-13 01:11:43.268 [INFO][4930] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bws6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22974a96-e85c-4133-adca-45fb1d4311f1", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bws6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b583b91d80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:43.293627 containerd[1551]: 2024-12-13 01:11:43.269 [INFO][4930] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.293627 containerd[1551]: 2024-12-13 01:11:43.269 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b583b91d80 ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.293627 containerd[1551]: 2024-12-13 01:11:43.274 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.293627 containerd[1551]: 2024-12-13 01:11:43.274 [INFO][4930] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bws6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22974a96-e85c-4133-adca-45fb1d4311f1", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753", Pod:"csi-node-driver-bws6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b583b91d80", MAC:"5e:00:71:09:57:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:43.293627 containerd[1551]: 2024-12-13 01:11:43.288 [INFO][4930] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753" Namespace="calico-system" Pod="csi-node-driver-bws6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:43.539546 sshd[4925]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:43.547439 containerd[1551]: time="2024-12-13T01:11:43.547288382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:11:43.549141 containerd[1551]: time="2024-12-13T01:11:43.547362592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:11:43.549141 containerd[1551]: time="2024-12-13T01:11:43.547478469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:43.549141 containerd[1551]: time="2024-12-13T01:11:43.547689194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:11:43.555752 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:52760.service - OpenSSH per-connection server daemon (10.0.0.1:52760). 
Dec 13 01:11:43.556453 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:52758.service: Deactivated successfully. Dec 13 01:11:43.568628 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:11:43.570751 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:11:43.572683 systemd-logind[1532]: Removed session 12. Dec 13 01:11:43.596562 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:43.598782 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:43.599232 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:11:43.604245 systemd-logind[1532]: New session 13 of user core. Dec 13 01:11:43.609771 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:11:43.614229 containerd[1551]: time="2024-12-13T01:11:43.613858494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bws6x,Uid:22974a96-e85c-4133-adca-45fb1d4311f1,Namespace:calico-system,Attempt:1,} returns sandbox id \"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753\"" Dec 13 01:11:43.739937 sshd[4989]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:43.744472 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:52760.service: Deactivated successfully. Dec 13 01:11:43.750255 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:11:43.751626 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:11:43.752713 systemd-logind[1532]: Removed session 13. Dec 13 01:11:44.041578 systemd-networkd[1241]: cali696f5b22eee: Gained IPv6LL Dec 13 01:11:44.206151 kubelet[2731]: E1213 01:11:44.206107 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:44.989352 containerd[1551]: time="2024-12-13T01:11:44.989281648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:44.991303 containerd[1551]: time="2024-12-13T01:11:44.991238039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:11:44.992736 containerd[1551]: time="2024-12-13T01:11:44.992703128Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:44.995599 containerd[1551]: time="2024-12-13T01:11:44.995564547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:44.996618 containerd[1551]: time="2024-12-13T01:11:44.996585483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.218147044s" Dec 13 01:11:44.996686 containerd[1551]: time="2024-12-13T01:11:44.996620619Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:11:44.997552 containerd[1551]: time="2024-12-13T01:11:44.997514155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:11:44.998891 containerd[1551]: time="2024-12-13T01:11:44.998841786Z" level=info msg="CreateContainer within sandbox \"469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:11:45.013327 containerd[1551]: time="2024-12-13T01:11:45.013269647Z" level=info msg="CreateContainer within sandbox \"469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2fd8743d1e851dcf7c29c7dff1b18d86ff0980c5f0b28b0e1732c692fb4ee76a\"" Dec 13 01:11:45.014547 containerd[1551]: time="2024-12-13T01:11:45.014345746Z" level=info msg="StartContainer for \"2fd8743d1e851dcf7c29c7dff1b18d86ff0980c5f0b28b0e1732c692fb4ee76a\"" Dec 13 01:11:45.257571 systemd-networkd[1241]: cali5b583b91d80: Gained IPv6LL Dec 13 01:11:45.387853 containerd[1551]: time="2024-12-13T01:11:45.387721419Z" level=info msg="StartContainer for \"2fd8743d1e851dcf7c29c7dff1b18d86ff0980c5f0b28b0e1732c692fb4ee76a\" returns successfully" Dec 13 01:11:45.390765 kubelet[2731]: E1213 01:11:45.390742 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:11:45.820970 containerd[1551]: time="2024-12-13T01:11:45.820923824Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:45.855834 containerd[1551]: time="2024-12-13T01:11:45.855758503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:11:45.858120 containerd[1551]: time="2024-12-13T01:11:45.858093183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 860.549613ms" Dec 13 01:11:45.858186 containerd[1551]: time="2024-12-13T01:11:45.858121096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:11:45.858678 containerd[1551]: time="2024-12-13T01:11:45.858637825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:11:45.859955 containerd[1551]: time="2024-12-13T01:11:45.859915683Z" level=info msg="CreateContainer within sandbox \"58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:11:46.160358 containerd[1551]: time="2024-12-13T01:11:46.160227580Z" level=info msg="CreateContainer within sandbox \"58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6235871bdce60beced3ed15a83b8baf62409b00fccf29365eb9499e49b67b39\"" Dec 13 01:11:46.160875 containerd[1551]: 
time="2024-12-13T01:11:46.160834580Z" level=info msg="StartContainer for \"f6235871bdce60beced3ed15a83b8baf62409b00fccf29365eb9499e49b67b39\"" Dec 13 01:11:46.308973 containerd[1551]: time="2024-12-13T01:11:46.308907445Z" level=info msg="StartContainer for \"f6235871bdce60beced3ed15a83b8baf62409b00fccf29365eb9499e49b67b39\" returns successfully" Dec 13 01:11:46.451502 kubelet[2731]: I1213 01:11:46.451468 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f8948c8d7-gfxjh" podStartSLOduration=24.282236425 podStartE2EDuration="29.451420531s" podCreationTimestamp="2024-12-13 01:11:17 +0000 UTC" firstStartedPulling="2024-12-13 01:11:39.827783834 +0000 UTC m=+41.881715015" lastFinishedPulling="2024-12-13 01:11:44.99696794 +0000 UTC m=+47.050899121" observedRunningTime="2024-12-13 01:11:46.437720376 +0000 UTC m=+48.491651557" watchObservedRunningTime="2024-12-13 01:11:46.451420531 +0000 UTC m=+48.505351712" Dec 13 01:11:47.314259 containerd[1551]: time="2024-12-13T01:11:47.314185494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:47.314986 containerd[1551]: time="2024-12-13T01:11:47.314902008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:11:47.316100 containerd[1551]: time="2024-12-13T01:11:47.316066012Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:47.318433 containerd[1551]: time="2024-12-13T01:11:47.318394962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:47.318917 containerd[1551]: time="2024-12-13T01:11:47.318884770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.46021219s" Dec 13 01:11:47.318963 containerd[1551]: time="2024-12-13T01:11:47.318917372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:11:47.320851 containerd[1551]: time="2024-12-13T01:11:47.320815021Z" level=info msg="CreateContainer within sandbox \"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:11:47.340587 containerd[1551]: time="2024-12-13T01:11:47.340540950Z" level=info msg="CreateContainer within sandbox \"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"83990d5cbf818ffc87e38a743182613c38840cf8f646c2b26aab7fa690d017a8\"" Dec 13 01:11:47.341129 containerd[1551]: time="2024-12-13T01:11:47.340965828Z" level=info msg="StartContainer for \"83990d5cbf818ffc87e38a743182613c38840cf8f646c2b26aab7fa690d017a8\"" Dec 13 01:11:47.399313 containerd[1551]: time="2024-12-13T01:11:47.399276068Z" level=info msg="StartContainer for 
\"83990d5cbf818ffc87e38a743182613c38840cf8f646c2b26aab7fa690d017a8\" returns successfully" Dec 13 01:11:47.400159 kubelet[2731]: I1213 01:11:47.400132 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:47.400329 kubelet[2731]: I1213 01:11:47.400231 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:47.400617 containerd[1551]: time="2024-12-13T01:11:47.400595934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:11:48.764929 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:56258.service - OpenSSH per-connection server daemon (10.0.0.1:56258). Dec 13 01:11:48.813615 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 56258 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:48.815550 sshd[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:48.820620 systemd-logind[1532]: New session 14 of user core. Dec 13 01:11:48.825681 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:11:48.963163 sshd[5175]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:48.963381 containerd[1551]: time="2024-12-13T01:11:48.963276033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:48.964646 containerd[1551]: time="2024-12-13T01:11:48.964594085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:11:48.965836 containerd[1551]: time="2024-12-13T01:11:48.965802984Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:48.968039 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:56258.service: Deactivated successfully. Dec 13 01:11:48.969272 containerd[1551]: time="2024-12-13T01:11:48.968699838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:48.969272 containerd[1551]: time="2024-12-13T01:11:48.969173176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.56854927s" Dec 13 01:11:48.969272 containerd[1551]: time="2024-12-13T01:11:48.969197732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:11:48.971049 containerd[1551]: time="2024-12-13T01:11:48.971027966Z" level=info msg="CreateContainer within sandbox \"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:11:48.972881 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:11:48.973887 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 01:11:48.974923 systemd-logind[1532]: Removed session 14. Dec 13 01:11:48.988134 containerd[1551]: time="2024-12-13T01:11:48.988075339Z" level=info msg="CreateContainer within sandbox \"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6a1fd2278c363bbc20c4b1cbc0236638636a482ec9348618c1cd8db93a0d36f4\"" Dec 13 01:11:48.988603 containerd[1551]: time="2024-12-13T01:11:48.988580987Z" level=info msg="StartContainer for \"6a1fd2278c363bbc20c4b1cbc0236638636a482ec9348618c1cd8db93a0d36f4\"" Dec 13 01:11:49.058862 containerd[1551]: time="2024-12-13T01:11:49.058739269Z" level=info msg="StartContainer for \"6a1fd2278c363bbc20c4b1cbc0236638636a482ec9348618c1cd8db93a0d36f4\" returns successfully" Dec 13 01:11:49.113051 kubelet[2731]: I1213 01:11:49.113022 2731 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:11:49.114220 kubelet[2731]: I1213 01:11:49.114200 2731 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:11:49.423683 kubelet[2731]: I1213 01:11:49.423266 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f8948c8d7-fq2q4" podStartSLOduration=26.922294142 podStartE2EDuration="32.423222372s" podCreationTimestamp="2024-12-13 01:11:17 +0000 UTC" firstStartedPulling="2024-12-13 01:11:40.35746148 +0000 UTC m=+42.411392661" lastFinishedPulling="2024-12-13 01:11:45.85838971 +0000 UTC m=+47.912320891" observedRunningTime="2024-12-13 01:11:46.455228174 +0000 UTC m=+48.509159355" watchObservedRunningTime="2024-12-13 01:11:49.423222372 +0000 UTC m=+51.477153553" Dec 13 01:11:49.463895 kubelet[2731]: I1213 01:11:49.463850 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:49.498311 kubelet[2731]: I1213 01:11:49.497657 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:11:49.516901 kubelet[2731]: I1213 01:11:49.516850 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-bws6x" podStartSLOduration=27.162515008 podStartE2EDuration="32.516804397s" podCreationTimestamp="2024-12-13 01:11:17 +0000 UTC" firstStartedPulling="2024-12-13 01:11:43.61513025 +0000 UTC m=+45.669061431" lastFinishedPulling="2024-12-13 01:11:48.969419639 +0000 UTC m=+51.023350820" observedRunningTime="2024-12-13 01:11:49.422987641 +0000 UTC m=+51.476918822" watchObservedRunningTime="2024-12-13 01:11:49.516804397 +0000 UTC m=+51.570735578" Dec 13 01:11:53.974589 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:56274.service - OpenSSH per-connection server daemon (10.0.0.1:56274). Dec 13 01:11:54.005461 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 56274 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:54.007019 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:54.011598 systemd-logind[1532]: New session 15 of user core. Dec 13 01:11:54.023717 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:11:54.148308 sshd[5276]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:54.152747 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:56274.service: Deactivated successfully. 
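The kubelet pod_startup_latency_tracker records above encode a fixed relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency with pull time excluded. The snippet below re-derives the csi-node-driver-bws6x numbers from the timestamps printed in the record; it is a worked check of values already in the log, nothing more.

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Matches the "2024-12-13 01:11:49.516804397 +0000 UTC" form in the records.
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 01:11:17 +0000 UTC")            // podCreationTimestamp
	firstPull := parse("2024-12-13 01:11:43.61513025 +0000 UTC") // firstStartedPulling
	lastPull := parse("2024-12-13 01:11:48.969419639 +0000 UTC") // lastFinishedPulling
	observed := parse("2024-12-13 01:11:49.516804397 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e) // 32.516804397s = podStartE2EDuration
	fmt.Println(slo) // 27.162515008s = podStartSLOduration
}

The same arithmetic checks out for calico-apiserver-7f8948c8d7-gfxjh above: 29.451420531s minus a 5.169184106s pull window gives the reported 24.282236425s SLO duration.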
Dec 13 01:11:54.155158 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:11:54.155170 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:11:54.156579 systemd-logind[1532]: Removed session 15. Dec 13 01:11:58.026074 containerd[1551]: time="2024-12-13T01:11:58.026025507Z" level=info msg="StopPodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\"" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.059 [WARNING][5312] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--8lhww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fa073bd-a2a4-4b4c-83a5-815ebeb43007", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c", Pod:"coredns-76f75df574-8lhww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali696f5b22eee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.059 [INFO][5312] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.059 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" iface="eth0" netns="" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.059 [INFO][5312] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.059 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.082 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.082 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.082 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.088 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.088 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.089 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.095717 containerd[1551]: 2024-12-13 01:11:58.092 [INFO][5312] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.096405 containerd[1551]: time="2024-12-13T01:11:58.095763364Z" level=info msg="TearDown network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" successfully" Dec 13 01:11:58.096405 containerd[1551]: time="2024-12-13T01:11:58.095797509Z" level=info msg="StopPodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" returns successfully" Dec 13 01:11:58.096475 containerd[1551]: time="2024-12-13T01:11:58.096411027Z" level=info msg="RemovePodSandbox for \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\"" Dec 13 01:11:58.098769 containerd[1551]: time="2024-12-13T01:11:58.098727702Z" level=info msg="Forcibly stopping sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\"" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.134 [WARNING][5343] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--8lhww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fa073bd-a2a4-4b4c-83a5-815ebeb43007", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89ecba1e8646134e49c227711f0a7838a6c109660dba4ef0b2ec77b6109b1c4c", Pod:"coredns-76f75df574-8lhww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali696f5b22eee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.134 [INFO][5343] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.134 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" iface="eth0" netns="" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.134 [INFO][5343] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.134 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.156 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.156 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.156 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.161 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.161 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" HandleID="k8s-pod-network.2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Workload="localhost-k8s-coredns--76f75df574--8lhww-eth0" Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.163 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.168409 containerd[1551]: 2024-12-13 01:11:58.165 [INFO][5343] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d" Dec 13 01:11:58.168892 containerd[1551]: time="2024-12-13T01:11:58.168445589Z" level=info msg="TearDown network for sandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" successfully" Dec 13 01:11:58.258781 containerd[1551]: time="2024-12-13T01:11:58.258730497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:11:58.258914 containerd[1551]: time="2024-12-13T01:11:58.258797495Z" level=info msg="RemovePodSandbox \"2c20f7d352a0fc1c2ae04c9735136e616f0a943beea8fd013ce32c2900586c8d\" returns successfully" Dec 13 01:11:58.259246 containerd[1551]: time="2024-12-13T01:11:58.259220531Z" level=info msg="StopPodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\"" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.295 [WARNING][5372] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea139758-c514-433e-ba8c-28ff42ef6f58", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff", Pod:"calico-apiserver-7f8948c8d7-fq2q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie21320f1eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.295 [INFO][5372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.295 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" iface="eth0" netns="" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.295 [INFO][5372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.295 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.317 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.317 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.317 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.323 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.323 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.324 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.330421 containerd[1551]: 2024-12-13 01:11:58.327 [INFO][5372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.330421 containerd[1551]: time="2024-12-13T01:11:58.330360808Z" level=info msg="TearDown network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" successfully" Dec 13 01:11:58.330421 containerd[1551]: time="2024-12-13T01:11:58.330403651Z" level=info msg="StopPodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" returns successfully" Dec 13 01:11:58.331131 containerd[1551]: time="2024-12-13T01:11:58.330942477Z" level=info msg="RemovePodSandbox for \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\"" Dec 13 01:11:58.331131 containerd[1551]: time="2024-12-13T01:11:58.330970351Z" level=info msg="Forcibly stopping sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\"" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.367 [WARNING][5401] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea139758-c514-433e-ba8c-28ff42ef6f58", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58098a3618f15a7d23892124ebcb7fda7c43d150c11a5f4f91fa48cd4e899eff", Pod:"calico-apiserver-7f8948c8d7-fq2q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie21320f1eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.368 [INFO][5401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.368 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" iface="eth0" netns="" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.368 [INFO][5401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.368 [INFO][5401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.389 [INFO][5408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.389 [INFO][5408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.389 [INFO][5408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.394 [WARNING][5408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.394 [INFO][5408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" HandleID="k8s-pod-network.fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--fq2q4-eth0" Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.396 [INFO][5408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.401733 containerd[1551]: 2024-12-13 01:11:58.398 [INFO][5401] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1" Dec 13 01:11:58.402243 containerd[1551]: time="2024-12-13T01:11:58.401777403Z" level=info msg="TearDown network for sandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" successfully" Dec 13 01:11:58.406236 containerd[1551]: time="2024-12-13T01:11:58.406206563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:11:58.406309 containerd[1551]: time="2024-12-13T01:11:58.406265295Z" level=info msg="RemovePodSandbox \"fa77e66aa38d44be86271e6ecea8c94153aaa016fab2651aa9ff72f6c58908d1\" returns successfully" Dec 13 01:11:58.406873 containerd[1551]: time="2024-12-13T01:11:58.406829791Z" level=info msg="StopPodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\"" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.446 [WARNING][5430] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bws6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22974a96-e85c-4133-adca-45fb1d4311f1", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753", Pod:"csi-node-driver-bws6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b583b91d80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.446 [INFO][5430] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.446 [INFO][5430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" iface="eth0" netns="" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.446 [INFO][5430] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.446 [INFO][5430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.474 [INFO][5437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.475 [INFO][5437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.475 [INFO][5437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.480 [WARNING][5437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.480 [INFO][5437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.481 [INFO][5437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.486662 containerd[1551]: 2024-12-13 01:11:58.483 [INFO][5430] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.487187 containerd[1551]: time="2024-12-13T01:11:58.486720091Z" level=info msg="TearDown network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" successfully" Dec 13 01:11:58.487187 containerd[1551]: time="2024-12-13T01:11:58.486751200Z" level=info msg="StopPodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" returns successfully" Dec 13 01:11:58.487263 containerd[1551]: time="2024-12-13T01:11:58.487240202Z" level=info msg="RemovePodSandbox for \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\"" Dec 13 01:11:58.487309 containerd[1551]: time="2024-12-13T01:11:58.487273316Z" level=info msg="Forcibly stopping sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\"" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.524 [WARNING][5459] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bws6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22974a96-e85c-4133-adca-45fb1d4311f1", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2496ddfa55c428b6bdc89d18d08c1aca327b4f6042754335a7fae3d10cfe753", Pod:"csi-node-driver-bws6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b583b91d80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.524 [INFO][5459] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.524 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" iface="eth0" netns="" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.524 [INFO][5459] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.524 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.544 [INFO][5466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.544 [INFO][5466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.544 [INFO][5466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.549 [WARNING][5466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.549 [INFO][5466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" HandleID="k8s-pod-network.ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Workload="localhost-k8s-csi--node--driver--bws6x-eth0" Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.550 [INFO][5466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.555305 containerd[1551]: 2024-12-13 01:11:58.553 [INFO][5459] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3" Dec 13 01:11:58.555779 containerd[1551]: time="2024-12-13T01:11:58.555351859Z" level=info msg="TearDown network for sandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" successfully" Dec 13 01:11:58.559260 containerd[1551]: time="2024-12-13T01:11:58.559212285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:11:58.559260 containerd[1551]: time="2024-12-13T01:11:58.559266108Z" level=info msg="RemovePodSandbox \"ae29459037d39a2bed82fe9e7f8daeca4f6c73f89d0770127f8b8547bd75cae3\" returns successfully" Dec 13 01:11:58.559725 containerd[1551]: time="2024-12-13T01:11:58.559687120Z" level=info msg="StopPodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\"" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.593 [WARNING][5489] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x6v2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28737369-3c2e-42da-8f56-354717ebe21b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c", Pod:"coredns-76f75df574-x6v2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7965917c119", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.594 [INFO][5489] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.594 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" iface="eth0" netns="" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.594 [INFO][5489] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.594 [INFO][5489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.614 [INFO][5496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.614 [INFO][5496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.614 [INFO][5496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.620 [WARNING][5496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.620 [INFO][5496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.621 [INFO][5496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.626510 containerd[1551]: 2024-12-13 01:11:58.623 [INFO][5489] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.626510 containerd[1551]: time="2024-12-13T01:11:58.626460055Z" level=info msg="TearDown network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" successfully" Dec 13 01:11:58.626510 containerd[1551]: time="2024-12-13T01:11:58.626487398Z" level=info msg="StopPodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" returns successfully" Dec 13 01:11:58.627067 containerd[1551]: time="2024-12-13T01:11:58.626946663Z" level=info msg="RemovePodSandbox for \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\"" Dec 13 01:11:58.627067 containerd[1551]: time="2024-12-13T01:11:58.626980618Z" level=info msg="Forcibly stopping sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\"" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.659 [WARNING][5519] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--x6v2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28737369-3c2e-42da-8f56-354717ebe21b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0620e78d7c6acc08e039b11932639050364f042dd0dfc9ffbfdb747ed0e8b7c", Pod:"coredns-76f75df574-x6v2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7965917c119", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.660 [INFO][5519] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.660 [INFO][5519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" iface="eth0" netns="" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.660 [INFO][5519] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.660 [INFO][5519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.679 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.679 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.679 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.684 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.684 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" HandleID="k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Workload="localhost-k8s-coredns--76f75df574--x6v2k-eth0" Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.685 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.689981 containerd[1551]: 2024-12-13 01:11:58.687 [INFO][5519] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9" Dec 13 01:11:58.690639 containerd[1551]: time="2024-12-13T01:11:58.690022965Z" level=info msg="TearDown network for sandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" successfully" Dec 13 01:11:58.693779 containerd[1551]: time="2024-12-13T01:11:58.693746540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:11:58.693841 containerd[1551]: time="2024-12-13T01:11:58.693789041Z" level=info msg="RemovePodSandbox \"5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9\" returns successfully" Dec 13 01:11:58.694325 containerd[1551]: time="2024-12-13T01:11:58.694281038Z" level=info msg="StopPodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\"" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.727 [WARNING][5549] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0", GenerateName:"calico-kube-controllers-54c9b9587d-", Namespace:"calico-system", SelfLink:"", UID:"f6bdff36-140a-401a-9765-907c2bbf003f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c9b9587d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2", Pod:"calico-kube-controllers-54c9b9587d-vnvps", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d2ca991923", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.727 [INFO][5549] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.727 [INFO][5549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" iface="eth0" netns="" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.727 [INFO][5549] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.727 [INFO][5549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.747 [INFO][5556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.748 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.748 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.753 [WARNING][5556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.753 [INFO][5556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.754 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.759126 containerd[1551]: 2024-12-13 01:11:58.756 [INFO][5549] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.759599 containerd[1551]: time="2024-12-13T01:11:58.759182157Z" level=info msg="TearDown network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" successfully" Dec 13 01:11:58.759599 containerd[1551]: time="2024-12-13T01:11:58.759213758Z" level=info msg="StopPodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" returns successfully" Dec 13 01:11:58.759834 containerd[1551]: time="2024-12-13T01:11:58.759792641Z" level=info msg="RemovePodSandbox for \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\"" Dec 13 01:11:58.759896 containerd[1551]: time="2024-12-13T01:11:58.759840041Z" level=info msg="Forcibly stopping sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\"" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.794 [WARNING][5579] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0", GenerateName:"calico-kube-controllers-54c9b9587d-", Namespace:"calico-system", SelfLink:"", UID:"f6bdff36-140a-401a-9765-907c2bbf003f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c9b9587d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2", Pod:"calico-kube-controllers-54c9b9587d-vnvps", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d2ca991923", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.794 [INFO][5579] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.794 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" iface="eth0" netns="" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.794 [INFO][5579] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.794 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.817 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.817 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.817 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.823 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.823 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" HandleID="k8s-pod-network.56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.824 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.829363 containerd[1551]: 2024-12-13 01:11:58.827 [INFO][5579] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c" Dec 13 01:11:58.829780 containerd[1551]: time="2024-12-13T01:11:58.829410878Z" level=info msg="TearDown network for sandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" successfully" Dec 13 01:11:58.833685 containerd[1551]: time="2024-12-13T01:11:58.833657640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:11:58.833762 containerd[1551]: time="2024-12-13T01:11:58.833703248Z" level=info msg="RemovePodSandbox \"56252000ea5caeeb012074f2ec67fd699e5c5d464207287e866c3d4ae2b9ff6c\" returns successfully" Dec 13 01:11:58.834131 containerd[1551]: time="2024-12-13T01:11:58.834110634Z" level=info msg="StopPodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\"" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.865 [WARNING][5609] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2a10eb6-3688-422c-b65b-01a7d73ed991", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f", Pod:"calico-apiserver-7f8948c8d7-gfxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6ed1627216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.865 [INFO][5609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.865 [INFO][5609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" iface="eth0" netns="" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.865 [INFO][5609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.865 [INFO][5609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.885 [INFO][5616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.885 [INFO][5616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.885 [INFO][5616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.890 [WARNING][5616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.890 [INFO][5616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.891 [INFO][5616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.896728 containerd[1551]: 2024-12-13 01:11:58.894 [INFO][5609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.896728 containerd[1551]: time="2024-12-13T01:11:58.896695178Z" level=info msg="TearDown network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" successfully" Dec 13 01:11:58.896728 containerd[1551]: time="2024-12-13T01:11:58.896726037Z" level=info msg="StopPodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" returns successfully" Dec 13 01:11:58.897674 containerd[1551]: time="2024-12-13T01:11:58.897620662Z" level=info msg="RemovePodSandbox for \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\"" Dec 13 01:11:58.897674 containerd[1551]: time="2024-12-13T01:11:58.897652322Z" level=info msg="Forcibly stopping sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\"" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.930 [WARNING][5638] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0", GenerateName:"calico-apiserver-7f8948c8d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2a10eb6-3688-422c-b65b-01a7d73ed991", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 11, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8948c8d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"469413a6ac50f55d6446cbe6832ff6ec2246ea7264166e4e98417eb42873e08f", Pod:"calico-apiserver-7f8948c8d7-gfxjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6ed1627216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.930 [INFO][5638] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.930 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" iface="eth0" netns="" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.930 [INFO][5638] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.930 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.952 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.953 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.953 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.958 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.958 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" HandleID="k8s-pod-network.56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Workload="localhost-k8s-calico--apiserver--7f8948c8d7--gfxjh-eth0" Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.959 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:11:58.964039 containerd[1551]: 2024-12-13 01:11:58.961 [INFO][5638] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3" Dec 13 01:11:58.964562 containerd[1551]: time="2024-12-13T01:11:58.964086553Z" level=info msg="TearDown network for sandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" successfully" Dec 13 01:11:58.988841 containerd[1551]: time="2024-12-13T01:11:58.988811356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:11:58.988903 containerd[1551]: time="2024-12-13T01:11:58.988862333Z" level=info msg="RemovePodSandbox \"56f0763fa63763a52065de01a53a010cf5800d8f00e29164eb555b462225c9c3\" returns successfully" Dec 13 01:11:59.159657 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:56192.service - OpenSSH per-connection server daemon (10.0.0.1:56192). Dec 13 01:11:59.256016 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 56192 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:59.258129 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:59.262949 systemd-logind[1532]: New session 16 of user core. Dec 13 01:11:59.275703 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:11:59.410285 sshd[5654]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:59.414819 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:56192.service: Deactivated successfully. Dec 13 01:11:59.417385 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:11:59.418125 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:11:59.418983 systemd-logind[1532]: Removed session 16. 
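The teardown passes above (two each for the coredns, calico-kube-controllers, and calico-apiserver sandboxes) all follow the same Calico sequence: take the host-wide IPAM lock, try to release the sandbox's address by its handleID, warn and ignore when the handle is already gone, fall back to releasing by the legacy workloadID, then drop the lock. Below is a minimal Go sketch of that idempotent release flow; the in-memory maps and type names are stand-ins for Calico's real libcalico-go datastore client, not its API.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a hypothetical stand-in for Calico's IPAM backend; the real
// plugin goes through the libcalico-go client instead of local maps.
type ipamStore struct {
	mu         sync.Mutex        // models the "host-wide IPAM lock" in the log
	byHandle   map[string]string // handleID -> allocated IP
	byWorkload map[string]string // legacy workloadID -> allocated IP
}

// releaseAddress mirrors the order of operations visible above: acquire the
// host-wide lock, release by handleID, treat a missing handle as a no-op,
// fall back to the legacy workloadID, then release the lock.
func (s *ipamStore) releaseAddress(handleID, workloadID string) {
	s.mu.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		s.mu.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()

	if ip, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		fmt.Printf("Released address %s using handleID\n", ip)
		return
	}
	fmt.Println("Asked to release address but it doesn't exist. Ignoring")

	if ip, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
		fmt.Printf("Released address %s using workloadID\n", ip)
	}
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	// A repeat teardown of an already-released sandbox finds nothing to
	// free, producing exactly the WARNING/Ignoring pair seen in the log.
	s.releaseAddress(
		"k8s-pod-network.5295beba40e9314cd39c30c3654de723e035f8fbef4a8b520f1132bd173465c9",
		"localhost-k8s-coredns--76f75df574--x6v2k-eth0",
	)
}

Because both release paths treat a missing allocation as success, StopPodSandbox and the later "Forcibly stopping" RemovePodSandbox pass can be retried against the same sandbox and still return successfully, which is exactly what the log shows.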
Dec 13 01:12:02.979200 containerd[1551]: time="2024-12-13T01:12:02.979132598Z" level=info msg="StopContainer for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" with timeout 300 (s)" Dec 13 01:12:02.980036 containerd[1551]: time="2024-12-13T01:12:02.979994768Z" level=info msg="Stop container \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" with signal terminated" Dec 13 01:12:03.280484 containerd[1551]: time="2024-12-13T01:12:03.280342695Z" level=info msg="StopContainer for \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\" with timeout 5 (s)" Dec 13 01:12:03.281387 containerd[1551]: time="2024-12-13T01:12:03.281185588Z" level=info msg="Stop container \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\" with signal terminated" Dec 13 01:12:03.281907 containerd[1551]: time="2024-12-13T01:12:03.281824894Z" level=info msg="StopContainer for \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\" with timeout 30 (s)" Dec 13 01:12:03.282986 containerd[1551]: time="2024-12-13T01:12:03.282814646Z" level=info msg="Stop container \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\" with signal terminated" Dec 13 01:12:03.340035 containerd[1551]: time="2024-12-13T01:12:03.339604896Z" level=info msg="shim disconnected" id=7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668 namespace=k8s.io Dec 13 01:12:03.340035 containerd[1551]: time="2024-12-13T01:12:03.339673647Z" level=warning msg="cleaning up after shim disconnected" id=7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668 namespace=k8s.io Dec 13 01:12:03.340035 containerd[1551]: time="2024-12-13T01:12:03.339684137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:03.341635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668-rootfs.mount: Deactivated successfully. Dec 13 01:12:03.342480 containerd[1551]: time="2024-12-13T01:12:03.342403619Z" level=info msg="shim disconnected" id=e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474 namespace=k8s.io Dec 13 01:12:03.342480 containerd[1551]: time="2024-12-13T01:12:03.342449265Z" level=warning msg="cleaning up after shim disconnected" id=e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474 namespace=k8s.io Dec 13 01:12:03.342480 containerd[1551]: time="2024-12-13T01:12:03.342457220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:03.346472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474-rootfs.mount: Deactivated successfully. 
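The three StopContainer requests above carry per-container grace periods (300 s, 5 s, and 30 s): the runtime delivers the stop signal ("signal terminated", i.e. SIGTERM) and would escalate to SIGKILL only if the container outlived its timeout; here the containers exit promptly and their shims disconnect. A rough Go sketch of that escalation, assuming a plain OS process driven via os/exec rather than containerd's task API:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM, waits up to the grace period, then
// force-kills, mirroring the CRI StopContainer contract seen in the log.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	fmt.Printf("Stop container (pid %d) with signal terminated\n", cmd.Process.Pid)
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		return cmd.Process.Kill() // grace period expired: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// A short timeout keeps the example quick; the log used 300/30/5 s.
	fmt.Println("stop finished:", stopWithTimeout(cmd, 2*time.Second))
}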
Dec 13 01:12:03.360471 containerd[1551]: time="2024-12-13T01:12:03.360362739Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:12:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:12:03.375837 containerd[1551]: time="2024-12-13T01:12:03.375782440Z" level=info msg="StopContainer for \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\" returns successfully" Dec 13 01:12:03.376474 containerd[1551]: time="2024-12-13T01:12:03.376438387Z" level=info msg="StopPodSandbox for \"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2\"" Dec 13 01:12:03.376526 containerd[1551]: time="2024-12-13T01:12:03.376496849Z" level=info msg="Container to stop \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:12:03.379964 containerd[1551]: time="2024-12-13T01:12:03.379787586Z" level=info msg="StopContainer for \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\" returns successfully" Dec 13 01:12:03.380234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2-shm.mount: Deactivated successfully. Dec 13 01:12:03.380473 containerd[1551]: time="2024-12-13T01:12:03.380432753Z" level=info msg="StopPodSandbox for \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\"" Dec 13 01:12:03.380538 containerd[1551]: time="2024-12-13T01:12:03.380508908Z" level=info msg="Container to stop \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:12:03.380538 containerd[1551]: time="2024-12-13T01:12:03.380528345Z" level=info msg="Container to stop \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:12:03.380698 containerd[1551]: time="2024-12-13T01:12:03.380539105Z" level=info msg="Container to stop \"669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:12:03.385965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae-shm.mount: Deactivated successfully. 
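The "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" messages are the sandbox-stop path checking each container against the CRI state machine: only running or unknown containers still need stopping, and already-exited ones are skipped. A small sketch of that precondition, with a hypothetical enum standing in for the CRI ContainerState type:

package main

import "fmt"

// ContainerState is a hypothetical mirror of the CRI states named in the log.
type ContainerState int

const (
	ContainerCreated ContainerState = iota
	ContainerRunning
	ContainerExited
	ContainerUnknown
)

func (s ContainerState) String() string {
	return [...]string{"CONTAINER_CREATED", "CONTAINER_RUNNING",
		"CONTAINER_EXITED", "CONTAINER_UNKNOWN"}[s]
}

// needsStop reports whether a container must still be stopped before its
// sandbox can be torn down; exited containers are skipped, which is what
// the messages above record for the already-stopped Calico containers.
func needsStop(state ContainerState) bool {
	return state == ContainerRunning || state == ContainerUnknown
}

func main() {
	for _, st := range []ContainerState{ContainerRunning, ContainerExited} {
		if needsStop(st) {
			fmt.Printf("stopping container in state %q\n", st)
		} else {
			fmt.Printf("skipping container, current state %q\n", st)
		}
	}
}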
Dec 13 01:12:03.416189 containerd[1551]: time="2024-12-13T01:12:03.415945193Z" level=info msg="shim disconnected" id=853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2 namespace=k8s.io Dec 13 01:12:03.416189 containerd[1551]: time="2024-12-13T01:12:03.416010456Z" level=warning msg="cleaning up after shim disconnected" id=853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2 namespace=k8s.io Dec 13 01:12:03.416189 containerd[1551]: time="2024-12-13T01:12:03.416020045Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:03.421553 containerd[1551]: time="2024-12-13T01:12:03.421325073Z" level=info msg="shim disconnected" id=25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae namespace=k8s.io Dec 13 01:12:03.421553 containerd[1551]: time="2024-12-13T01:12:03.421553006Z" level=warning msg="cleaning up after shim disconnected" id=25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae namespace=k8s.io Dec 13 01:12:03.421690 containerd[1551]: time="2024-12-13T01:12:03.421564357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:03.431764 containerd[1551]: time="2024-12-13T01:12:03.431257270Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:12:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:12:03.512120 containerd[1551]: time="2024-12-13T01:12:03.512075741Z" level=info msg="TearDown network for sandbox \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" successfully" Dec 13 01:12:03.512403 containerd[1551]: time="2024-12-13T01:12:03.512361234Z" level=info msg="StopPodSandbox for \"25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae\" returns successfully" Dec 13 01:12:03.617914 systemd-networkd[1241]: cali3d2ca991923: Link DOWN Dec 13 01:12:03.617923 systemd-networkd[1241]: cali3d2ca991923: Lost carrier Dec 13 01:12:03.649419 kubelet[2731]: I1213 01:12:03.647880 2731 topology_manager.go:215] "Topology Admit Handler" podUID="1ceea34f-3611-47b5-99d4-5a9c42f34af6" podNamespace="calico-system" podName="calico-node-vhqs9" Dec 13 01:12:03.649419 kubelet[2731]: E1213 01:12:03.647960 2731 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9510896b-0814-4622-9cc7-1bf1c95421d6" containerName="install-cni" Dec 13 01:12:03.649419 kubelet[2731]: E1213 01:12:03.647974 2731 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9510896b-0814-4622-9cc7-1bf1c95421d6" containerName="calico-node" Dec 13 01:12:03.649419 kubelet[2731]: E1213 01:12:03.647982 2731 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9510896b-0814-4622-9cc7-1bf1c95421d6" containerName="flexvol-driver" Dec 13 01:12:03.649419 kubelet[2731]: I1213 01:12:03.648016 2731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9510896b-0814-4622-9cc7-1bf1c95421d6" containerName="calico-node" Dec 13 01:12:03.676573 kubelet[2731]: I1213 01:12:03.676535 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-net-dir\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676573 kubelet[2731]: I1213 01:12:03.676586 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9510896b-0814-4622-9cc7-1bf1c95421d6-tigera-ca-bundle\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676774 kubelet[2731]: I1213 01:12:03.676605 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-run-calico\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676774 kubelet[2731]: I1213 01:12:03.676621 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-bin-dir\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676774 kubelet[2731]: I1213 01:12:03.676639 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-log-dir\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676774 kubelet[2731]: I1213 01:12:03.676654 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-xtables-lock\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676774 kubelet[2731]: I1213 01:12:03.676668 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-lib-modules\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676774 kubelet[2731]: I1213 01:12:03.676687 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-policysync\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676969 kubelet[2731]: I1213 01:12:03.676707 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-flexvol-driver-host\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676969 kubelet[2731]: I1213 01:12:03.676701 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.676969 kubelet[2731]: I1213 01:12:03.676729 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xwj\" (UniqueName: \"kubernetes.io/projected/9510896b-0814-4622-9cc7-1bf1c95421d6-kube-api-access-j5xwj\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676969 kubelet[2731]: I1213 01:12:03.676810 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-lib-calico\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676969 kubelet[2731]: I1213 01:12:03.676868 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9510896b-0814-4622-9cc7-1bf1c95421d6-node-certs\") pod \"9510896b-0814-4622-9cc7-1bf1c95421d6\" (UID: \"9510896b-0814-4622-9cc7-1bf1c95421d6\") " Dec 13 01:12:03.676969 kubelet[2731]: I1213 01:12:03.676933 2731 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.677627 kubelet[2731]: I1213 01:12:03.677051 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677627 kubelet[2731]: I1213 01:12:03.677243 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677627 kubelet[2731]: I1213 01:12:03.677277 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677627 kubelet[2731]: I1213 01:12:03.677303 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677627 kubelet[2731]: I1213 01:12:03.677327 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677781 kubelet[2731]: I1213 01:12:03.677353 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677781 kubelet[2731]: I1213 01:12:03.677399 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-policysync" (OuterVolumeSpecName: "policysync") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.677920 kubelet[2731]: I1213 01:12:03.677888 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:12:03.680156 kubelet[2731]: I1213 01:12:03.680131 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9510896b-0814-4622-9cc7-1bf1c95421d6-kube-api-access-j5xwj" (OuterVolumeSpecName: "kube-api-access-j5xwj") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "kube-api-access-j5xwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:12:03.680580 kubelet[2731]: I1213 01:12:03.680471 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9510896b-0814-4622-9cc7-1bf1c95421d6-node-certs" (OuterVolumeSpecName: "node-certs") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:12:03.685103 kubelet[2731]: I1213 01:12:03.685062 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9510896b-0814-4622-9cc7-1bf1c95421d6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9510896b-0814-4622-9cc7-1bf1c95421d6" (UID: "9510896b-0814-4622-9cc7-1bf1c95421d6"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:12:03.778026 kubelet[2731]: I1213 01:12:03.777972 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-policysync\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778026 kubelet[2731]: I1213 01:12:03.778019 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnzd\" (UniqueName: \"kubernetes.io/projected/1ceea34f-3611-47b5-99d4-5a9c42f34af6-kube-api-access-dlnzd\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778026 kubelet[2731]: I1213 01:12:03.778040 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-lib-modules\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778226 kubelet[2731]: I1213 01:12:03.778060 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ceea34f-3611-47b5-99d4-5a9c42f34af6-node-certs\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778226 kubelet[2731]: I1213 01:12:03.778090 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-cni-net-dir\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778226 kubelet[2731]: I1213 01:12:03.778108 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-var-lib-calico\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778226 kubelet[2731]: I1213 01:12:03.778158 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-cni-bin-dir\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778226 kubelet[2731]: I1213 01:12:03.778196 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-flexvol-driver-host\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778344 kubelet[2731]: I1213 01:12:03.778238 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-xtables-lock\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778344 
kubelet[2731]: I1213 01:12:03.778258 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-var-run-calico\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778344 kubelet[2731]: I1213 01:12:03.778301 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ceea34f-3611-47b5-99d4-5a9c42f34af6-cni-log-dir\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778344 kubelet[2731]: I1213 01:12:03.778323 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ceea34f-3611-47b5-99d4-5a9c42f34af6-tigera-ca-bundle\") pod \"calico-node-vhqs9\" (UID: \"1ceea34f-3611-47b5-99d4-5a9c42f34af6\") " pod="calico-system/calico-node-vhqs9" Dec 13 01:12:03.778461 kubelet[2731]: I1213 01:12:03.778424 2731 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778461 kubelet[2731]: I1213 01:12:03.778439 2731 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j5xwj\" (UniqueName: \"kubernetes.io/projected/9510896b-0814-4622-9cc7-1bf1c95421d6-kube-api-access-j5xwj\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778461 kubelet[2731]: I1213 01:12:03.778449 2731 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778461 kubelet[2731]: I1213 01:12:03.778460 2731 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-var-run-calico\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778469 2731 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778478 2731 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778487 2731 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778496 2731 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-policysync\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778505 2731 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9510896b-0814-4622-9cc7-1bf1c95421d6-node-certs\") on node \"localhost\" 
DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778514 2731 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9510896b-0814-4622-9cc7-1bf1c95421d6-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.778554 kubelet[2731]: I1213 01:12:03.778522 2731 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9510896b-0814-4622-9cc7-1bf1c95421d6-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:12:03.954554 kubelet[2731]: E1213 01:12:03.954517 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:03.955141 containerd[1551]: time="2024-12-13T01:12:03.955056428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vhqs9,Uid:1ceea34f-3611-47b5-99d4-5a9c42f34af6,Namespace:calico-system,Attempt:0,}" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.616 [INFO][5845] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.616 [INFO][5845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" iface="eth0" netns="/var/run/netns/cni-371ee7a9-777d-881a-574d-3fa07488044f" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.617 [INFO][5845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" iface="eth0" netns="/var/run/netns/cni-371ee7a9-777d-881a-574d-3fa07488044f" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.638 [INFO][5845] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" after=21.615122ms iface="eth0" netns="/var/run/netns/cni-371ee7a9-777d-881a-574d-3fa07488044f" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.638 [INFO][5845] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.638 [INFO][5845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.658 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" HandleID="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0" Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.658 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:03.659 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:04.021 [INFO][5855] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" HandleID="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0"
Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:04.021 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" HandleID="k8s-pod-network.853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2" Workload="localhost-k8s-calico--kube--controllers--54c9b9587d--vnvps-eth0"
Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:04.022 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:12:04.027499 containerd[1551]: 2024-12-13 01:12:04.025 [INFO][5845] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2"
Dec 13 01:12:04.028474 containerd[1551]: time="2024-12-13T01:12:04.028397543Z" level=info msg="TearDown network for sandbox \"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2\" successfully"
Dec 13 01:12:04.028474 containerd[1551]: time="2024-12-13T01:12:04.028432760Z" level=info msg="StopPodSandbox for \"853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2\" returns successfully"
Dec 13 01:12:04.118222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-853a8d839ce32fcdcf7cd4721e25ba4f4c42d63639e03e6e3699409fcdc1d4e2-rootfs.mount: Deactivated successfully.
Dec 13 01:12:04.124644 systemd[1]: run-netns-cni\x2d371ee7a9\x2d777d\x2d881a\x2d574d\x2d3fa07488044f.mount: Deactivated successfully.
Dec 13 01:12:04.124788 systemd[1]: var-lib-kubelet-pods-9510896b\x2d0814\x2d4622\x2d9cc7\x2d1bf1c95421d6-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
Dec 13 01:12:04.124940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25984e03888bb02d88ae7b9f8ab3581634ffbe7d6fd696978af2ec36c69cc4ae-rootfs.mount: Deactivated successfully.
Dec 13 01:12:04.125082 systemd[1]: var-lib-kubelet-pods-9510896b\x2d0814\x2d4622\x2d9cc7\x2d1bf1c95421d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5xwj.mount: Deactivated successfully.
Dec 13 01:12:04.125220 systemd[1]: var-lib-kubelet-pods-9510896b\x2d0814\x2d4622\x2d9cc7\x2d1bf1c95421d6-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Dec 13 01:12:04.180109 kubelet[2731]: I1213 01:12:04.180057 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbh48\" (UniqueName: \"kubernetes.io/projected/f6bdff36-140a-401a-9765-907c2bbf003f-kube-api-access-fbh48\") pod \"f6bdff36-140a-401a-9765-907c2bbf003f\" (UID: \"f6bdff36-140a-401a-9765-907c2bbf003f\") "
Dec 13 01:12:04.180226 kubelet[2731]: I1213 01:12:04.180130 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6bdff36-140a-401a-9765-907c2bbf003f-tigera-ca-bundle\") pod \"f6bdff36-140a-401a-9765-907c2bbf003f\" (UID: \"f6bdff36-140a-401a-9765-907c2bbf003f\") "
Dec 13 01:12:04.186478 kubelet[2731]: I1213 01:12:04.185637 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6bdff36-140a-401a-9765-907c2bbf003f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f6bdff36-140a-401a-9765-907c2bbf003f" (UID: "f6bdff36-140a-401a-9765-907c2bbf003f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:12:04.187318 systemd[1]: var-lib-kubelet-pods-f6bdff36\x2d140a\x2d401a\x2d9765\x2d907c2bbf003f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Dec 13 01:12:04.187706 kubelet[2731]: I1213 01:12:04.187677 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bdff36-140a-401a-9765-907c2bbf003f-kube-api-access-fbh48" (OuterVolumeSpecName: "kube-api-access-fbh48") pod "f6bdff36-140a-401a-9765-907c2bbf003f" (UID: "f6bdff36-140a-401a-9765-907c2bbf003f"). InnerVolumeSpecName "kube-api-access-fbh48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:12:04.187904 systemd[1]: var-lib-kubelet-pods-f6bdff36\x2d140a\x2d401a\x2d9765\x2d907c2bbf003f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbh48.mount: Deactivated successfully.
Dec 13 01:12:04.192715 containerd[1551]: time="2024-12-13T01:12:04.192621740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:12:04.192715 containerd[1551]: time="2024-12-13T01:12:04.192670241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:12:04.192715 containerd[1551]: time="2024-12-13T01:12:04.192683807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:12:04.192866 containerd[1551]: time="2024-12-13T01:12:04.192772936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:12:04.234243 containerd[1551]: time="2024-12-13T01:12:04.234001867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vhqs9,Uid:1ceea34f-3611-47b5-99d4-5a9c42f34af6,Namespace:calico-system,Attempt:0,} returns sandbox id \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\""
Dec 13 01:12:04.236231 kubelet[2731]: E1213 01:12:04.235824 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:04.239331 containerd[1551]: time="2024-12-13T01:12:04.239286511Z" level=info msg="CreateContainer within sandbox \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:12:04.281027 kubelet[2731]: I1213 01:12:04.280977 2731 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6bdff36-140a-401a-9765-907c2bbf003f-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Dec 13 01:12:04.281027 kubelet[2731]: I1213 01:12:04.281013 2731 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fbh48\" (UniqueName: \"kubernetes.io/projected/f6bdff36-140a-401a-9765-907c2bbf003f-kube-api-access-fbh48\") on node \"localhost\" DevicePath \"\""
Dec 13 01:12:04.362340 containerd[1551]: time="2024-12-13T01:12:04.362283621Z" level=info msg="CreateContainer within sandbox \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a9657fd5b9a270a53300c60dfdd7cce865c5a47a72ddc2a9b954260441d33291\""
Dec 13 01:12:04.369213 containerd[1551]: time="2024-12-13T01:12:04.369156796Z" level=info msg="StartContainer for \"a9657fd5b9a270a53300c60dfdd7cce865c5a47a72ddc2a9b954260441d33291\""
Dec 13 01:12:04.420524 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194).
Dec 13 01:12:04.436299 containerd[1551]: time="2024-12-13T01:12:04.436167885Z" level=info msg="StartContainer for \"a9657fd5b9a270a53300c60dfdd7cce865c5a47a72ddc2a9b954260441d33291\" returns successfully"
Dec 13 01:12:04.458265 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:04.462600 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:04.465773 kubelet[2731]: E1213 01:12:04.465739 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:04.473919 kubelet[2731]: I1213 01:12:04.472639 2731 scope.go:117] "RemoveContainer" containerID="e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474"
Dec 13 01:12:04.474253 systemd-logind[1532]: New session 17 of user core.
Dec 13 01:12:04.478718 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:12:04.480988 containerd[1551]: time="2024-12-13T01:12:04.480751250Z" level=info msg="RemoveContainer for \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\""
Dec 13 01:12:04.498533 containerd[1551]: time="2024-12-13T01:12:04.497447879Z" level=info msg="RemoveContainer for \"e0c996fc84f5c289b9708638c1f2b8ec352814609bf40056bb9ae60896bfa474\" returns successfully"
Dec 13 01:12:04.499603 kubelet[2731]: I1213 01:12:04.499576 2731 scope.go:117] "RemoveContainer" containerID="cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656"
Dec 13 01:12:04.504030 containerd[1551]: time="2024-12-13T01:12:04.503989533Z" level=info msg="RemoveContainer for \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\""
Dec 13 01:12:04.515014 containerd[1551]: time="2024-12-13T01:12:04.514504080Z" level=info msg="RemoveContainer for \"cff856a7d882f4851b383cdb958d76b8f2ae5d0a4c1413df971f32a679504656\" returns successfully"
Dec 13 01:12:04.515501 kubelet[2731]: I1213 01:12:04.515270 2731 scope.go:117] "RemoveContainer" containerID="669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a"
Dec 13 01:12:04.518836 containerd[1551]: time="2024-12-13T01:12:04.518718561Z" level=info msg="RemoveContainer for \"669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a\""
Dec 13 01:12:04.530920 containerd[1551]: time="2024-12-13T01:12:04.530838409Z" level=info msg="shim disconnected" id=a9657fd5b9a270a53300c60dfdd7cce865c5a47a72ddc2a9b954260441d33291 namespace=k8s.io
Dec 13 01:12:04.530920 containerd[1551]: time="2024-12-13T01:12:04.530895367Z" level=warning msg="cleaning up after shim disconnected" id=a9657fd5b9a270a53300c60dfdd7cce865c5a47a72ddc2a9b954260441d33291 namespace=k8s.io
Dec 13 01:12:04.530920 containerd[1551]: time="2024-12-13T01:12:04.530903774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:12:04.532269 containerd[1551]: time="2024-12-13T01:12:04.531967265Z" level=info msg="RemoveContainer for \"669c52e542dd1b312511e97a8662c65556d0d1f559b84ff960440ddf5408237a\" returns successfully"
Dec 13 01:12:04.535308 kubelet[2731]: I1213 01:12:04.535272 2731 scope.go:117] "RemoveContainer" containerID="7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668"
Dec 13 01:12:04.545786 containerd[1551]: time="2024-12-13T01:12:04.545685271Z" level=info msg="RemoveContainer for \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\""
Dec 13 01:12:04.553291 containerd[1551]: time="2024-12-13T01:12:04.552840602Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:12:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:12:04.559610 containerd[1551]: time="2024-12-13T01:12:04.559578550Z" level=info msg="RemoveContainer for \"7b25bdb1b18af5694d740e53a1f743bd5422510a927458e86acdc6ace8d07668\" returns successfully"
Dec 13 01:12:04.621424 sshd[5934]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:04.626191 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:56194.service: Deactivated successfully.
Dec 13 01:12:04.629162 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:12:04.629305 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:12:04.630635 systemd-logind[1532]: Removed session 17.
Dec 13 01:12:05.484455 kubelet[2731]: E1213 01:12:05.484421 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:05.491343 containerd[1551]: time="2024-12-13T01:12:05.491289305Z" level=info msg="CreateContainer within sandbox \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:12:05.519040 containerd[1551]: time="2024-12-13T01:12:05.518982176Z" level=info msg="CreateContainer within sandbox \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e02972d5e992729150b6e89fd6b362f48964e9c12b8499f47cf489a42df73634\""
Dec 13 01:12:05.519641 containerd[1551]: time="2024-12-13T01:12:05.519613005Z" level=info msg="StartContainer for \"e02972d5e992729150b6e89fd6b362f48964e9c12b8499f47cf489a42df73634\""
Dec 13 01:12:05.584078 containerd[1551]: time="2024-12-13T01:12:05.584034657Z" level=info msg="StartContainer for \"e02972d5e992729150b6e89fd6b362f48964e9c12b8499f47cf489a42df73634\" returns successfully"
Dec 13 01:12:06.035582 kubelet[2731]: I1213 01:12:06.035537 2731 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9510896b-0814-4622-9cc7-1bf1c95421d6" path="/var/lib/kubelet/pods/9510896b-0814-4622-9cc7-1bf1c95421d6/volumes"
Dec 13 01:12:06.036331 kubelet[2731]: I1213 01:12:06.036300 2731 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f6bdff36-140a-401a-9765-907c2bbf003f" path="/var/lib/kubelet/pods/f6bdff36-140a-401a-9765-907c2bbf003f/volumes"
Dec 13 01:12:06.075907 containerd[1551]: time="2024-12-13T01:12:06.075823865Z" level=info msg="shim disconnected" id=e02972d5e992729150b6e89fd6b362f48964e9c12b8499f47cf489a42df73634 namespace=k8s.io
Dec 13 01:12:06.075907 containerd[1551]: time="2024-12-13T01:12:06.075899158Z" level=warning msg="cleaning up after shim disconnected" id=e02972d5e992729150b6e89fd6b362f48964e9c12b8499f47cf489a42df73634 namespace=k8s.io
Dec 13 01:12:06.075907 containerd[1551]: time="2024-12-13T01:12:06.075912032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:12:06.111322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e02972d5e992729150b6e89fd6b362f48964e9c12b8499f47cf489a42df73634-rootfs.mount: Deactivated successfully.
Dec 13 01:12:06.489080 kubelet[2731]: E1213 01:12:06.489028 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:06.506630 containerd[1551]: time="2024-12-13T01:12:06.506587837Z" level=info msg="CreateContainer within sandbox \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 01:12:06.526071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437658033.mount: Deactivated successfully.
Dec 13 01:12:06.526509 containerd[1551]: time="2024-12-13T01:12:06.526452710Z" level=info msg="CreateContainer within sandbox \"88415d5cf3b996c0a5fbb037b412a7bcf91e2c7b23eb806fdb2ae794d5b22297\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2bb109e36d6ed7eea37b3befd34ebbc7f4bdd47ce84ed8cc5bcb46bd8a1a3fe1\""
Dec 13 01:12:06.527794 containerd[1551]: time="2024-12-13T01:12:06.527083178Z" level=info msg="StartContainer for \"2bb109e36d6ed7eea37b3befd34ebbc7f4bdd47ce84ed8cc5bcb46bd8a1a3fe1\""
Dec 13 01:12:06.592742 containerd[1551]: time="2024-12-13T01:12:06.592700286Z" level=info msg="StartContainer for \"2bb109e36d6ed7eea37b3befd34ebbc7f4bdd47ce84ed8cc5bcb46bd8a1a3fe1\" returns successfully"
Dec 13 01:12:07.516185 kubelet[2731]: E1213 01:12:07.516153 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:07.536883 systemd[1]: run-containerd-runc-k8s.io-2bb109e36d6ed7eea37b3befd34ebbc7f4bdd47ce84ed8cc5bcb46bd8a1a3fe1-runc.ApbAv7.mount: Deactivated successfully.
Dec 13 01:12:07.688362 kubelet[2731]: I1213 01:12:07.688232 2731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-vhqs9" podStartSLOduration=4.68818434 podStartE2EDuration="4.68818434s" podCreationTimestamp="2024-12-13 01:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:12:07.688176976 +0000 UTC m=+69.742108187" watchObservedRunningTime="2024-12-13 01:12:07.68818434 +0000 UTC m=+69.742115621"
Dec 13 01:12:08.518470 kubelet[2731]: E1213 01:12:08.518436 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:08.732387 containerd[1551]: time="2024-12-13T01:12:08.732297124Z" level=info msg="shim disconnected" id=2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6 namespace=k8s.io
Dec 13 01:12:08.732387 containerd[1551]: time="2024-12-13T01:12:08.732380072Z" level=warning msg="cleaning up after shim disconnected" id=2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6 namespace=k8s.io
Dec 13 01:12:08.732387 containerd[1551]: time="2024-12-13T01:12:08.732393567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:12:08.735899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6-rootfs.mount: Deactivated successfully.
Dec 13 01:12:08.747308 containerd[1551]: time="2024-12-13T01:12:08.747249738Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:12:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:12:08.763721 containerd[1551]: time="2024-12-13T01:12:08.763679094Z" level=info msg="StopContainer for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" returns successfully"
Dec 13 01:12:08.764295 containerd[1551]: time="2024-12-13T01:12:08.764254306Z" level=info msg="StopPodSandbox for \"5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf\""
Dec 13 01:12:08.764349 containerd[1551]: time="2024-12-13T01:12:08.764299612Z" level=info msg="Container to stop \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:12:08.768892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf-shm.mount: Deactivated successfully.
Dec 13 01:12:08.800503 containerd[1551]: time="2024-12-13T01:12:08.800415758Z" level=info msg="shim disconnected" id=5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf namespace=k8s.io
Dec 13 01:12:08.800503 containerd[1551]: time="2024-12-13T01:12:08.800491853Z" level=warning msg="cleaning up after shim disconnected" id=5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf namespace=k8s.io
Dec 13 01:12:08.800503 containerd[1551]: time="2024-12-13T01:12:08.800507893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:12:08.803381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf-rootfs.mount: Deactivated successfully.
Dec 13 01:12:08.823783 containerd[1551]: time="2024-12-13T01:12:08.823727018Z" level=info msg="TearDown network for sandbox \"5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf\" successfully"
Dec 13 01:12:08.823783 containerd[1551]: time="2024-12-13T01:12:08.823763287Z" level=info msg="StopPodSandbox for \"5d19cb12346cc98e72fb3a1f68c03c2ec85537af586e31ea3db904f97348d1bf\" returns successfully"
Dec 13 01:12:08.914544 kubelet[2731]: I1213 01:12:08.914479 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8cf63954-e97e-45c1-bb13-1b47bbb699cb-typha-certs\") pod \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\" (UID: \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\") "
Dec 13 01:12:08.914544 kubelet[2731]: I1213 01:12:08.914542 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf63954-e97e-45c1-bb13-1b47bbb699cb-tigera-ca-bundle\") pod \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\" (UID: \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\") "
Dec 13 01:12:08.914756 kubelet[2731]: I1213 01:12:08.914589 2731 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp2n5\" (UniqueName: \"kubernetes.io/projected/8cf63954-e97e-45c1-bb13-1b47bbb699cb-kube-api-access-mp2n5\") pod \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\" (UID: \"8cf63954-e97e-45c1-bb13-1b47bbb699cb\") "
Dec 13 01:12:08.918861 kubelet[2731]: I1213 01:12:08.918820 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf63954-e97e-45c1-bb13-1b47bbb699cb-kube-api-access-mp2n5" (OuterVolumeSpecName: "kube-api-access-mp2n5") pod "8cf63954-e97e-45c1-bb13-1b47bbb699cb" (UID: "8cf63954-e97e-45c1-bb13-1b47bbb699cb"). InnerVolumeSpecName "kube-api-access-mp2n5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:12:08.918924 kubelet[2731]: I1213 01:12:08.918874 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf63954-e97e-45c1-bb13-1b47bbb699cb-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "8cf63954-e97e-45c1-bb13-1b47bbb699cb" (UID: "8cf63954-e97e-45c1-bb13-1b47bbb699cb"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:12:08.920618 kubelet[2731]: I1213 01:12:08.920570 2731 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cf63954-e97e-45c1-bb13-1b47bbb699cb-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8cf63954-e97e-45c1-bb13-1b47bbb699cb" (UID: "8cf63954-e97e-45c1-bb13-1b47bbb699cb"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:12:08.920830 systemd[1]: var-lib-kubelet-pods-8cf63954\x2de97e\x2d45c1\x2dbb13\x2d1b47bbb699cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmp2n5.mount: Deactivated successfully.
Dec 13 01:12:08.921022 systemd[1]: var-lib-kubelet-pods-8cf63954\x2de97e\x2d45c1\x2dbb13\x2d1b47bbb699cb-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Dec 13 01:12:09.015526 kubelet[2731]: I1213 01:12:09.015476 2731 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mp2n5\" (UniqueName: \"kubernetes.io/projected/8cf63954-e97e-45c1-bb13-1b47bbb699cb-kube-api-access-mp2n5\") on node \"localhost\" DevicePath \"\""
Dec 13 01:12:09.015526 kubelet[2731]: I1213 01:12:09.015516 2731 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8cf63954-e97e-45c1-bb13-1b47bbb699cb-typha-certs\") on node \"localhost\" DevicePath \"\""
Dec 13 01:12:09.015526 kubelet[2731]: I1213 01:12:09.015528 2731 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf63954-e97e-45c1-bb13-1b47bbb699cb-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Dec 13 01:12:09.522411 kubelet[2731]: I1213 01:12:09.521546 2731 scope.go:117] "RemoveContainer" containerID="2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6"
Dec 13 01:12:09.524948 containerd[1551]: time="2024-12-13T01:12:09.524786477Z" level=info msg="RemoveContainer for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\""
Dec 13 01:12:09.536934 systemd[1]: var-lib-kubelet-pods-8cf63954\x2de97e\x2d45c1\x2dbb13\x2d1b47bbb699cb-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Dec 13 01:12:09.608244 containerd[1551]: time="2024-12-13T01:12:09.608187499Z" level=info msg="RemoveContainer for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" returns successfully"
Dec 13 01:12:09.608578 kubelet[2731]: I1213 01:12:09.608547 2731 scope.go:117] "RemoveContainer" containerID="2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6"
Dec 13 01:12:09.618867 containerd[1551]: time="2024-12-13T01:12:09.612506745Z" level=error msg="ContainerStatus for \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\": not found"
Dec 13 01:12:09.619078 kubelet[2731]: E1213 01:12:09.619040 2731 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\": not found" containerID="2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6"
Dec 13 01:12:09.619224 kubelet[2731]: I1213 01:12:09.619099 2731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6"} err="failed to get container status \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2eb92fcaca297b9a9cb3094a49ead81663a1a69e7caa11bf7e67d2ebb816cbe6\": not found"
Dec 13 01:12:09.632586 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:58444.service - OpenSSH per-connection server daemon (10.0.0.1:58444).
Dec 13 01:12:09.666781 sshd[6420]: Accepted publickey for core from 10.0.0.1 port 58444 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:09.669293 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:09.674296 systemd-logind[1532]: New session 18 of user core.
Dec 13 01:12:09.678636 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:12:09.857506 sshd[6420]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:09.863651 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:58454.service - OpenSSH per-connection server daemon (10.0.0.1:58454).
Dec 13 01:12:09.864640 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:58444.service: Deactivated successfully.
Dec 13 01:12:09.867184 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:12:09.868460 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:12:09.869845 systemd-logind[1532]: Removed session 18.
Dec 13 01:12:09.897789 sshd[6435]: Accepted publickey for core from 10.0.0.1 port 58454 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:09.899564 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:09.904605 systemd-logind[1532]: New session 19 of user core.
Dec 13 01:12:09.913857 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:12:10.034730 kubelet[2731]: I1213 01:12:10.034690 2731 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8cf63954-e97e-45c1-bb13-1b47bbb699cb" path="/var/lib/kubelet/pods/8cf63954-e97e-45c1-bb13-1b47bbb699cb/volumes"
Dec 13 01:12:10.550422 sshd[6435]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:10.557594 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:58470.service - OpenSSH per-connection server daemon (10.0.0.1:58470).
Dec 13 01:12:10.558059 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:58454.service: Deactivated successfully.
Dec 13 01:12:10.562868 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:12:10.563121 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:12:10.564488 systemd-logind[1532]: Removed session 19.
Dec 13 01:12:10.587456 sshd[6449]: Accepted publickey for core from 10.0.0.1 port 58470 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:10.589204 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:10.593391 systemd-logind[1532]: New session 20 of user core.
Dec 13 01:12:10.605676 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:12:12.850396 sshd[6449]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:12.864756 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478).
Dec 13 01:12:12.866070 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:58470.service: Deactivated successfully.
Dec 13 01:12:12.872334 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:12:12.873385 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:12:12.876420 systemd-logind[1532]: Removed session 20.
Dec 13 01:12:12.899734 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:12.901236 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:12.906257 systemd-logind[1532]: New session 21 of user core.
Dec 13 01:12:12.915622 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:12:13.132002 sshd[6471]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:13.143150 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:58488.service - OpenSSH per-connection server daemon (10.0.0.1:58488).
Dec 13 01:12:13.143824 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:58478.service: Deactivated successfully.
Dec 13 01:12:13.147691 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:12:13.148287 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:12:13.150957 systemd-logind[1532]: Removed session 21.
Dec 13 01:12:13.171847 sshd[6485]: Accepted publickey for core from 10.0.0.1 port 58488 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:13.173448 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:13.178174 systemd-logind[1532]: New session 22 of user core.
Dec 13 01:12:13.186625 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:12:13.302544 sshd[6485]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:13.307010 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:58488.service: Deactivated successfully.
Dec 13 01:12:13.309311 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:12:13.309348 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:12:13.310310 systemd-logind[1532]: Removed session 22.
Dec 13 01:12:15.032585 kubelet[2731]: E1213 01:12:15.032519 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:16.466736 kubelet[2731]: I1213 01:12:16.466669 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:12:18.311675 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:37788.service - OpenSSH per-connection server daemon (10.0.0.1:37788).
Dec 13 01:12:18.339807 sshd[6516]: Accepted publickey for core from 10.0.0.1 port 37788 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:18.341991 sshd[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:18.347527 systemd-logind[1532]: New session 23 of user core.
Dec 13 01:12:18.352732 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:12:18.464813 sshd[6516]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:18.468665 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:37788.service: Deactivated successfully.
Dec 13 01:12:18.471258 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:12:18.472046 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:12:18.473052 systemd-logind[1532]: Removed session 23.
Dec 13 01:12:23.476786 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:37802.service - OpenSSH per-connection server daemon (10.0.0.1:37802).
Dec 13 01:12:23.517035 sshd[6539]: Accepted publickey for core from 10.0.0.1 port 37802 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:23.518799 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:23.523337 systemd-logind[1532]: New session 24 of user core.
Dec 13 01:12:23.532607 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:12:23.652460 sshd[6539]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:23.656361 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:37802.service: Deactivated successfully.
Dec 13 01:12:23.658589 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:12:23.658623 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:12:23.659979 systemd-logind[1532]: Removed session 24.
Dec 13 01:12:27.032495 kubelet[2731]: E1213 01:12:27.032435 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:28.033089 kubelet[2731]: E1213 01:12:28.033050 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:28.663841 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:59046.service - OpenSSH per-connection server daemon (10.0.0.1:59046).
Dec 13 01:12:28.701730 sshd[6562]: Accepted publickey for core from 10.0.0.1 port 59046 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:28.703863 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:28.708222 systemd-logind[1532]: New session 25 of user core.
Dec 13 01:12:28.712615 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:12:28.833812 sshd[6562]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:28.837763 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:59046.service: Deactivated successfully.
Dec 13 01:12:28.840331 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:12:28.841190 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:12:28.842173 systemd-logind[1532]: Removed session 25.
Dec 13 01:12:33.843658 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:59058.service - OpenSSH per-connection server daemon (10.0.0.1:59058).
Dec 13 01:12:33.877068 sshd[6577]: Accepted publickey for core from 10.0.0.1 port 59058 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:33.878838 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:33.883182 systemd-logind[1532]: New session 26 of user core.
Dec 13 01:12:33.889632 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:12:34.014142 sshd[6577]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:34.021730 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:59058.service: Deactivated successfully.
Dec 13 01:12:34.023176 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:12:34.028301 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:12:34.029252 systemd-logind[1532]: Removed session 26.
Dec 13 01:12:34.033691 kubelet[2731]: E1213 01:12:34.033659 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:34.044218 kubelet[2731]: E1213 01:12:34.044168 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:39.021623 systemd[1]: Started sshd@26-10.0.0.74:22-10.0.0.1:33322.service - OpenSSH per-connection server daemon (10.0.0.1:33322).
Dec 13 01:12:39.058246 sshd[6615]: Accepted publickey for core from 10.0.0.1 port 33322 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:39.060087 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:39.064464 systemd-logind[1532]: New session 27 of user core.
Dec 13 01:12:39.073775 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:12:39.190405 sshd[6615]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:39.194552 systemd[1]: sshd@26-10.0.0.74:22-10.0.0.1:33322.service: Deactivated successfully.
Dec 13 01:12:39.197521 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:12:39.198223 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:12:39.199204 systemd-logind[1532]: Removed session 27.
Dec 13 01:12:44.207682 systemd[1]: Started sshd@27-10.0.0.74:22-10.0.0.1:33336.service - OpenSSH per-connection server daemon (10.0.0.1:33336).
Dec 13 01:12:44.235682 sshd[6633]: Accepted publickey for core from 10.0.0.1 port 33336 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:44.237457 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:44.241408 systemd-logind[1532]: New session 28 of user core.
Dec 13 01:12:44.249646 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:12:44.350122 sshd[6633]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:44.353778 systemd[1]: sshd@27-10.0.0.74:22-10.0.0.1:33336.service: Deactivated successfully.
Dec 13 01:12:44.355899 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:12:44.355963 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:12:44.356966 systemd-logind[1532]: Removed session 28.
Dec 13 01:12:46.033323 kubelet[2731]: E1213 01:12:46.033288 2731 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:12:49.359574 systemd[1]: Started sshd@28-10.0.0.74:22-10.0.0.1:33180.service - OpenSSH per-connection server daemon (10.0.0.1:33180).
Dec 13 01:12:49.387267 sshd[6657]: Accepted publickey for core from 10.0.0.1 port 33180 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:12:49.388834 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:12:49.393662 systemd-logind[1532]: New session 29 of user core.
Dec 13 01:12:49.402648 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:12:49.511689 sshd[6657]: pam_unix(sshd:session): session closed for user core
Dec 13 01:12:49.515991 systemd[1]: sshd@28-10.0.0.74:22-10.0.0.1:33180.service: Deactivated successfully.
Dec 13 01:12:49.518818 systemd-logind[1532]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:12:49.518838 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:12:49.520100 systemd-logind[1532]: Removed session 29.