Dec 13 01:06:54.880600 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:06:54.880628 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:06:54.880642 kernel: BIOS-provided physical RAM map: Dec 13 01:06:54.880651 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:06:54.880659 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:06:54.880667 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:06:54.880677 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 01:06:54.880685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 01:06:54.880693 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:06:54.880705 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:06:54.880713 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:06:54.880736 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:06:54.880745 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 01:06:54.880753 kernel: NX (Execute Disable) protection: active Dec 13 01:06:54.880763 kernel: APIC: Static calls initialized Dec 13 01:06:54.880775 kernel: SMBIOS 2.8 present. 
Dec 13 01:06:54.880785 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 01:06:54.880794 kernel: Hypervisor detected: KVM Dec 13 01:06:54.880803 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:06:54.880812 kernel: kvm-clock: using sched offset of 2215673809 cycles Dec 13 01:06:54.880822 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:06:54.880832 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:06:54.880842 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:06:54.880851 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:06:54.880864 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 01:06:54.880873 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:06:54.880883 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:06:54.880892 kernel: Using GB pages for direct mapping Dec 13 01:06:54.880901 kernel: ACPI: Early table checksum verification disabled Dec 13 01:06:54.880911 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 01:06:54.880920 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880929 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880938 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880951 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 01:06:54.880960 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880969 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880978 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880988 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:06:54.880997 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 01:06:54.881007 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 01:06:54.881020 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 01:06:54.881032 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 01:06:54.881042 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 01:06:54.881051 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 01:06:54.881061 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 01:06:54.881070 kernel: No NUMA configuration found Dec 13 01:06:54.881080 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 01:06:54.881092 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 01:06:54.881102 kernel: Zone ranges: Dec 13 01:06:54.881112 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:06:54.881122 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 01:06:54.881131 kernel: Normal empty Dec 13 01:06:54.881141 kernel: Movable zone start for each node Dec 13 01:06:54.881151 kernel: Early memory node ranges Dec 13 01:06:54.881160 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:06:54.881170 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 01:06:54.881180 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Dec 13 01:06:54.881192 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:06:54.881202 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:06:54.881211 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:06:54.881221 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:06:54.881230 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:06:54.881240 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:06:54.881250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:06:54.881260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:06:54.881269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:06:54.881291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:06:54.881300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:06:54.881310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:06:54.881320 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:06:54.881330 kernel: TSC deadline timer available Dec 13 01:06:54.881339 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:06:54.881349 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:06:54.881359 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:06:54.881368 kernel: kvm-guest: setup PV sched yield Dec 13 01:06:54.881381 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:06:54.881391 kernel: Booting paravirtualized kernel on KVM Dec 13 01:06:54.881401 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:06:54.881411 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:06:54.881421 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Dec 13 01:06:54.881431 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Dec 13 01:06:54.881440 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:06:54.881450 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:06:54.881460 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:06:54.881474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:06:54.881486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:06:54.881497 kernel: random: crng init done Dec 13 01:06:54.881509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:06:54.881519 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:06:54.881528 kernel: Fallback order for Node 0: 0 Dec 13 01:06:54.881539 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Dec 13 01:06:54.881548 kernel: Policy zone: DMA32 Dec 13 01:06:54.881561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:06:54.881571 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Dec 13 01:06:54.881581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:06:54.881591 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:06:54.881601 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:06:54.881610 kernel: Dynamic Preempt: voluntary Dec 13 01:06:54.881620 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:06:54.881631 kernel: rcu: RCU event tracing is enabled. Dec 13 01:06:54.881641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:06:54.881654 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:06:54.881664 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:06:54.881674 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:06:54.881684 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:06:54.881694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:06:54.881704 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:06:54.881714 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:06:54.881746 kernel: Console: colour VGA+ 80x25 Dec 13 01:06:54.881755 kernel: printk: console [ttyS0] enabled Dec 13 01:06:54.881768 kernel: ACPI: Core revision 20230628 Dec 13 01:06:54.881778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:06:54.881788 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:06:54.881798 kernel: x2apic enabled Dec 13 01:06:54.881807 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:06:54.881817 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 01:06:54.881827 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 01:06:54.881837 kernel: kvm-guest: setup PV IPIs Dec 13 01:06:54.881858 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:06:54.881868 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:06:54.881878 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:06:54.881888 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:06:54.881901 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:06:54.881911 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:06:54.881922 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:06:54.881932 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:06:54.881942 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:06:54.881956 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:06:54.881966 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:06:54.881976 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:06:54.881986 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:06:54.881995 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:06:54.882004 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:06:54.882015 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:06:54.882022 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:06:54.882032 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:06:54.882040 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:06:54.882048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:06:54.882055 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:06:54.882063 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:06:54.882070 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:06:54.882077 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:06:54.882085 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:06:54.882092 kernel: landlock: Up and running. Dec 13 01:06:54.882101 kernel: SELinux: Initializing. Dec 13 01:06:54.882109 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:06:54.882116 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:06:54.882124 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:06:54.882132 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:06:54.882139 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:06:54.882147 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:06:54.882154 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:06:54.882162 kernel: ... version: 0 Dec 13 01:06:54.882171 kernel: ... bit width: 48 Dec 13 01:06:54.882179 kernel: ... generic registers: 6 Dec 13 01:06:54.882186 kernel: ... value mask: 0000ffffffffffff Dec 13 01:06:54.882194 kernel: ... max period: 00007fffffffffff Dec 13 01:06:54.882201 kernel: ... fixed-purpose events: 0 Dec 13 01:06:54.882208 kernel: ... 
event mask: 000000000000003f Dec 13 01:06:54.882215 kernel: signal: max sigframe size: 1776 Dec 13 01:06:54.882223 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:06:54.882230 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:06:54.882240 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:06:54.882247 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:06:54.882255 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 01:06:54.882262 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:06:54.882269 kernel: smpboot: Max logical packages: 1 Dec 13 01:06:54.882286 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:06:54.882294 kernel: devtmpfs: initialized Dec 13 01:06:54.882301 kernel: x86/mm: Memory block size: 128MB Dec 13 01:06:54.882309 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:06:54.882319 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:06:54.882326 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:06:54.882334 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:06:54.882341 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:06:54.882349 kernel: audit: type=2000 audit(1734052015.031:1): state=initialized audit_enabled=0 res=1 Dec 13 01:06:54.882356 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:06:54.882363 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:06:54.882371 kernel: cpuidle: using governor menu Dec 13 01:06:54.882378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:06:54.882387 kernel: dca service started, version 1.12.1 Dec 13 01:06:54.882395 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:06:54.882402 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 01:06:54.882410 kernel: PCI: Using configuration type 1 for base access Dec 13 01:06:54.882417 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:06:54.882425 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:06:54.882432 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:06:54.882440 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:06:54.882447 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:06:54.882457 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:06:54.882464 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:06:54.882471 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:06:54.882479 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:06:54.882486 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:06:54.882494 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:06:54.882501 kernel: ACPI: Interpreter enabled Dec 13 01:06:54.882508 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:06:54.882516 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:06:54.882526 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:06:54.882533 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:06:54.882540 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:06:54.882548 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:06:54.882750 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:06:54.882882 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:06:54.883003 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:06:54.883016 kernel: PCI host bridge to bus 0000:00 Dec 13 01:06:54.883141 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:06:54.883253 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:06:54.883373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:06:54.883484 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:06:54.883594 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:06:54.883703 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:06:54.883895 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:06:54.884035 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:06:54.884173 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:06:54.884305 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 01:06:54.884426 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 01:06:54.884546 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 01:06:54.884666 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:06:54.884822 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:06:54.884944 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 01:06:54.885064 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 01:06:54.885184 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 01:06:54.885323 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:06:54.885445 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 01:06:54.885570 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 
01:06:54.885695 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 01:06:54.885866 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:06:54.885987 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 01:06:54.886105 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 01:06:54.886224 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 01:06:54.886352 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 01:06:54.886479 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:06:54.886602 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:06:54.886741 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:06:54.886865 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 01:06:54.886984 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 01:06:54.887111 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:06:54.887233 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:06:54.887246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:06:54.887254 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:06:54.887262 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:06:54.887270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:06:54.887285 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:06:54.887293 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:06:54.887301 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:06:54.887309 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:06:54.887316 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:06:54.887326 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:06:54.887333 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:06:54.887341 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:06:54.887348 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:06:54.887356 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:06:54.887363 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:06:54.887371 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:06:54.887378 kernel: iommu: Default domain type: Translated Dec 13 01:06:54.887385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:06:54.887395 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:06:54.887402 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:06:54.887410 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:06:54.887417 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 01:06:54.887545 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:06:54.887664 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:06:54.887846 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:06:54.887857 kernel: vgaarb: loaded Dec 13 01:06:54.887868 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:06:54.887876 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:06:54.887883 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:06:54.887891 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 
01:06:54.887899 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:06:54.887906 kernel: pnp: PnP ACPI init Dec 13 01:06:54.888038 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:06:54.888049 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:06:54.888060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:06:54.888067 kernel: NET: Registered PF_INET protocol family Dec 13 01:06:54.888075 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:06:54.888082 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:06:54.888090 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:06:54.888098 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:06:54.888105 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:06:54.888112 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:06:54.888120 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:06:54.888130 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:06:54.888137 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:06:54.888154 kernel: NET: Registered PF_XDP protocol family Dec 13 01:06:54.888267 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:06:54.888386 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:06:54.888495 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:06:54.888603 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:06:54.888710 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:06:54.888834 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:06:54.888848 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:06:54.888856 kernel: Initialise system trusted keyrings Dec 13 01:06:54.888864 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:06:54.888872 kernel: Key type asymmetric registered Dec 13 01:06:54.888879 kernel: Asymmetric key parser 'x509' registered Dec 13 01:06:54.888886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:06:54.888894 kernel: io scheduler mq-deadline registered Dec 13 01:06:54.888901 kernel: io scheduler kyber registered Dec 13 01:06:54.888909 kernel: io scheduler bfq registered Dec 13 01:06:54.888918 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:06:54.888926 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:06:54.888934 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:06:54.888942 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:06:54.888949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:06:54.888957 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:06:54.888964 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:06:54.888972 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:06:54.888979 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:06:54.889103 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:06:54.889114 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:06:54.889226 kernel: 
rtc_cmos 00:04: registered as rtc0 Dec 13 01:06:54.889347 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:06:54 UTC (1734052014) Dec 13 01:06:54.889460 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:06:54.889470 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:06:54.889478 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:06:54.889488 kernel: Segment Routing with IPv6 Dec 13 01:06:54.889496 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:06:54.889505 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:06:54.889513 kernel: Key type dns_resolver registered Dec 13 01:06:54.889522 kernel: IPI shorthand broadcast: enabled Dec 13 01:06:54.889531 kernel: sched_clock: Marking stable (615002732, 106903454)->(737201917, -15295731) Dec 13 01:06:54.889538 kernel: registered taskstats version 1 Dec 13 01:06:54.889546 kernel: Loading compiled-in X.509 certificates Dec 13 01:06:54.889553 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:06:54.889563 kernel: Key type .fscrypt registered Dec 13 01:06:54.889570 kernel: Key type fscrypt-provisioning registered Dec 13 01:06:54.889577 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:06:54.889585 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:06:54.889592 kernel: ima: No architecture policies found Dec 13 01:06:54.889600 kernel: clk: Disabling unused clocks Dec 13 01:06:54.889607 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:06:54.889614 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:06:54.889622 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:06:54.889631 kernel: Run /init as init process Dec 13 01:06:54.889639 kernel: with arguments: Dec 13 01:06:54.889647 kernel: /init Dec 13 01:06:54.889654 kernel: with environment: Dec 13 01:06:54.889661 kernel: HOME=/ Dec 13 01:06:54.889668 kernel: TERM=linux Dec 13 01:06:54.889676 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:06:54.889685 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:06:54.889697 systemd[1]: Detected virtualization kvm. Dec 13 01:06:54.889705 systemd[1]: Detected architecture x86-64. Dec 13 01:06:54.889713 systemd[1]: Running in initrd. Dec 13 01:06:54.889755 systemd[1]: No hostname configured, using default hostname. Dec 13 01:06:54.889763 systemd[1]: Hostname set to . Dec 13 01:06:54.889772 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:06:54.889780 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:06:54.889788 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:06:54.889799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:06:54.889809 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:06:54.889828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 13 01:06:54.889838 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:06:54.889847 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:06:54.889859 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:06:54.889867 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:06:54.889876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:06:54.889884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:06:54.889893 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:06:54.889901 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:06:54.889909 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:06:54.889917 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:06:54.889928 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:06:54.889936 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:06:54.889944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:06:54.889953 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:06:54.889961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:06:54.889969 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:06:54.889977 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:06:54.889986 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:06:54.889994 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:06:54.890004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:06:54.890012 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:06:54.890021 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:06:54.890029 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:06:54.890037 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:06:54.890045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:06:54.890054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:06:54.890062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:06:54.890072 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:06:54.890099 systemd-journald[192]: Collecting audit messages is disabled. Dec 13 01:06:54.890120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:06:54.890131 systemd-journald[192]: Journal started Dec 13 01:06:54.890151 systemd-journald[192]: Runtime Journal (/run/log/journal/0286af54a97c434c9cb28383f061f4a8) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:06:54.874828 systemd-modules-load[193]: Inserted module 'overlay' Dec 13 01:06:54.909756 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 01:06:54.909771 kernel: Bridge firewalling registered Dec 13 01:06:54.901652 systemd-modules-load[193]: Inserted module 'br_netfilter' Dec 13 01:06:54.912499 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:06:54.912782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:06:54.914139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:06:54.916318 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:06:54.929211 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:06:54.931257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:06:54.934847 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:06:54.936605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:06:54.945914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:06:54.948779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:06:54.951056 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:06:54.953730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:06:54.968852 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:06:54.972103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:06:54.983403 dracut-cmdline[227]: dracut-dracut-053 Dec 13 01:06:54.986938 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:06:55.004731 systemd-resolved[230]: Positive Trust Anchors: Dec 13 01:06:55.004745 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:06:55.004776 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:06:55.007188 systemd-resolved[230]: Defaulting to hostname 'linux'. Dec 13 01:06:55.008223 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:06:55.014174 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:06:55.083757 kernel: SCSI subsystem initialized Dec 13 01:06:55.092762 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:06:55.103765 kernel: iscsi: registered transport (tcp) Dec 13 01:06:55.124751 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:06:55.124777 kernel: QLogic iSCSI HBA Driver Dec 13 01:06:55.174191 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:06:55.181955 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:06:55.207980 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:06:55.208005 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:06:55.209022 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:06:55.250758 kernel: raid6: avx2x4 gen() 27011 MB/s Dec 13 01:06:55.267747 kernel: raid6: avx2x2 gen() 26619 MB/s Dec 13 01:06:55.284874 kernel: raid6: avx2x1 gen() 25130 MB/s Dec 13 01:06:55.284903 kernel: raid6: using algorithm avx2x4 gen() 27011 MB/s Dec 13 01:06:55.302822 kernel: raid6: .... xor() 7669 MB/s, rmw enabled Dec 13 01:06:55.302843 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:06:55.323752 kernel: xor: automatically using best checksumming function avx Dec 13 01:06:55.477752 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:06:55.491087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:06:55.498887 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:06:55.511410 systemd-udevd[413]: Using default interface naming scheme 'v255'. Dec 13 01:06:55.516005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:06:55.523908 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:06:55.537157 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Dec 13 01:06:55.568060 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:06:55.577873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:06:55.644307 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:06:55.654896 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:06:55.666943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:06:55.670012 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:06:55.672545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:06:55.675259 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:06:55.678740 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:06:55.707564 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:06:55.707715 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:06:55.707745 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:06:55.707756 kernel: GPT:9289727 != 19775487 Dec 13 01:06:55.707773 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:06:55.707783 kernel: GPT:9289727 != 19775487 Dec 13 01:06:55.707793 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:06:55.707803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:06:55.707813 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:06:55.707824 kernel: libata version 3.00 loaded. 
Dec 13 01:06:55.707834 kernel: AES CTR mode by8 optimization enabled Dec 13 01:06:55.685842 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:06:55.695845 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:06:55.718077 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:06:55.734793 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:06:55.734811 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:06:55.734978 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:06:55.735117 kernel: scsi host0: ahci Dec 13 01:06:55.735282 kernel: scsi host1: ahci Dec 13 01:06:55.735435 kernel: scsi host2: ahci Dec 13 01:06:55.735579 kernel: scsi host3: ahci Dec 13 01:06:55.735736 kernel: scsi host4: ahci Dec 13 01:06:55.735883 kernel: scsi host5: ahci Dec 13 01:06:55.736028 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:06:55.736039 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:06:55.736050 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:06:55.736059 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:06:55.736069 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:06:55.736079 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:06:55.719418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:06:55.744501 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (467) Dec 13 01:06:55.744527 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) Dec 13 01:06:55.719524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:06:55.721814 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:06:55.723061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:06:55.723183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:06:55.724503 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:06:55.730404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:06:55.757119 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:06:55.767852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:06:55.801553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:06:55.807350 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:06:55.808811 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:06:55.815937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:06:55.825899 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:06:55.828028 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:06:55.837022 disk-uuid[556]: Primary Header is updated. 
Dec 13 01:06:55.837022 disk-uuid[556]: Secondary Entries is updated. Dec 13 01:06:55.837022 disk-uuid[556]: Secondary Header is updated. Dec 13 01:06:55.841754 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:06:55.846756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:06:55.852342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:06:56.046153 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:06:56.046244 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:06:56.046257 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:06:56.047745 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:06:56.048750 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:06:56.048821 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:06:56.049749 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:06:56.050967 kernel: ata3.00: applying bridge limits Dec 13 01:06:56.050985 kernel: ata3.00: configured for UDMA/100 Dec 13 01:06:56.051750 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:06:56.103768 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:06:56.117541 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:06:56.117556 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:06:56.846748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:06:56.847244 disk-uuid[557]: The operation has completed successfully. Dec 13 01:06:56.873979 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:06:56.874133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:06:56.913015 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:06:56.916543 sh[592]: Success Dec 13 01:06:56.930833 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:06:56.966540 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:06:56.977308 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:06:56.982144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:06:56.992272 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:06:56.992320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:06:56.992332 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:06:56.993456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:06:56.994319 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:06:56.998920 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:06:56.999890 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:06:57.009019 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:06:57.011258 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 01:06:57.020044 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:06:57.020097 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:06:57.020111 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:06:57.023757 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:06:57.032430 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:06:57.034437 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:06:57.043307 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:06:57.052942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:06:57.155114 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:06:57.166954 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:06:57.178865 ignition[685]: Ignition 2.19.0 Dec 13 01:06:57.178877 ignition[685]: Stage: fetch-offline Dec 13 01:06:57.178920 ignition[685]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:06:57.178932 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:06:57.179050 ignition[685]: parsed url from cmdline: "" Dec 13 01:06:57.179054 ignition[685]: no config URL provided Dec 13 01:06:57.179060 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:06:57.179071 ignition[685]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:06:57.179103 ignition[685]: op(1): [started] loading QEMU firmware config module Dec 13 01:06:57.179110 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:06:57.192567 ignition[685]: op(1): [finished] loading QEMU firmware config module Dec 13 01:06:57.194176 systemd-networkd[778]: lo: Link UP Dec 13 01:06:57.194187 systemd-networkd[778]: lo: Gained carrier Dec 13 01:06:57.196101 systemd-networkd[778]: Enumeration completed Dec 13 01:06:57.196237 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:06:57.196549 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:06:57.196554 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:06:57.196873 systemd[1]: Reached target network.target - Network. Dec 13 01:06:57.197396 systemd-networkd[778]: eth0: Link UP Dec 13 01:06:57.197400 systemd-networkd[778]: eth0: Gained carrier Dec 13 01:06:57.197408 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:06:57.221795 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:06:57.253857 ignition[685]: parsing config with SHA512: 3325c1936fa6139cf1101fabb5b6a3233ab8e38ff5cb14a125fbbabcb5af025b1dfab5a2ed3d74b2f7eacbb4f6e154300c391fdad2d99f58ed1a81a1fb87ca89 Dec 13 01:06:57.260936 unknown[685]: fetched base config from "system" Dec 13 01:06:57.260958 unknown[685]: fetched user config from "qemu" Dec 13 01:06:57.261414 ignition[685]: fetch-offline: fetch-offline passed Dec 13 01:06:57.261495 ignition[685]: Ignition finished successfully Dec 13 01:06:57.263034 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.43 Dec 13 01:06:57.263897 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Dec 13 01:06:57.265036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:06:57.265498 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:06:57.272001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:06:57.290395 ignition[784]: Ignition 2.19.0 Dec 13 01:06:57.290408 ignition[784]: Stage: kargs Dec 13 01:06:57.290611 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:06:57.290623 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:06:57.291632 ignition[784]: kargs: kargs passed Dec 13 01:06:57.291682 ignition[784]: Ignition finished successfully Dec 13 01:06:57.298886 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:06:57.306959 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:06:57.323193 ignition[792]: Ignition 2.19.0 Dec 13 01:06:57.323213 ignition[792]: Stage: disks Dec 13 01:06:57.323474 ignition[792]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:06:57.323487 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:06:57.327538 ignition[792]: disks: disks passed Dec 13 01:06:57.327592 ignition[792]: Ignition finished successfully Dec 13 01:06:57.331036 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:06:57.333156 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:06:57.333568 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:06:57.335598 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:06:57.338004 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:06:57.339701 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:06:57.350932 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:06:57.363238 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:06:57.369684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:06:57.390912 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:06:57.476742 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:06:57.477157 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:06:57.478128 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:06:57.495901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:06:57.497742 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:06:57.498501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:06:57.498552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:06:57.508571 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809) Dec 13 01:06:57.508599 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:06:57.508613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:06:57.508627 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:06:57.498578 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:06:57.510749 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:06:57.512542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:06:57.530690 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:06:57.533528 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:06:57.575140 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:06:57.579963 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:06:57.584489 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:06:57.589538 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:06:57.668283 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:06:57.679817 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:06:57.681231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:06:57.691756 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:06:57.705165 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:06:57.725375 ignition[923]: INFO : Ignition 2.19.0 Dec 13 01:06:57.725375 ignition[923]: INFO : Stage: mount Dec 13 01:06:57.727267 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:06:57.727267 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:06:57.730395 ignition[923]: INFO : mount: mount passed Dec 13 01:06:57.731233 ignition[923]: INFO : Ignition finished successfully Dec 13 01:06:57.733289 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:06:57.744900 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:06:57.991715 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:06:58.003939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:06:58.011679 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935) Dec 13 01:06:58.011741 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:06:58.011757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:06:58.013191 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:06:58.015764 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:06:58.016978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:06:58.047333 ignition[954]: INFO : Ignition 2.19.0 Dec 13 01:06:58.047333 ignition[954]: INFO : Stage: files Dec 13 01:06:58.049221 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:06:58.049221 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:06:58.049221 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:06:58.052938 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:06:58.052938 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:06:58.056202 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:06:58.056202 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:06:58.059264 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:06:58.056447 unknown[954]: wrote ssh authorized keys file for user: core Dec 13 01:06:58.062051 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:06:58.062051 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:06:58.099185 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:06:58.202676 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:06:58.205110 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:06:58.574976 systemd-networkd[778]: eth0: Gained IPv6LL Dec 13 01:06:58.644131 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:07:00.009782 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:07:00.009782 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:07:00.013942 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:07:00.038765 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:07:00.048152 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:07:00.049750 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:07:00.049750 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:07:00.049750 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:07:00.049750 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:07:00.049750 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:07:00.049750 ignition[954]: INFO : files: files passed Dec 13 01:07:00.049750 ignition[954]: INFO : Ignition finished successfully Dec 13 01:07:00.050977 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:07:00.064056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:07:00.067145 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 13 01:07:00.068965 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:07:00.069100 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:07:00.077629 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:07:00.080493 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:07:00.082285 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:07:00.083903 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:07:00.083494 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:07:00.085991 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:07:00.097986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:07:00.128368 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:07:00.128540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:07:00.130899 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:07:00.133184 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:07:00.133685 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:07:00.134625 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:07:00.154618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:07:00.169994 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:07:00.180174 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:07:00.182554 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:07:00.192046 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:07:00.193918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:07:00.194068 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:07:00.196439 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:07:00.198333 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:07:00.200579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:07:00.202927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:07:00.205115 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:07:00.207472 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:07:00.209743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:07:00.212164 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:07:00.214429 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:07:00.216673 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:07:00.218490 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:07:00.218638 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:07:00.220774 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:07:00.222348 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:07:00.224421 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:07:00.224531 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:07:00.226610 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:07:00.226749 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:07:00.228917 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:07:00.229029 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:07:00.231006 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:07:00.232778 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:07:00.236853 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:07:00.238459 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:07:00.240319 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:07:00.242125 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:07:00.242248 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:07:00.244158 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:07:00.244290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:07:00.246576 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:07:00.246715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:07:00.248610 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:07:00.248748 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:07:00.258059 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:07:00.260317 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:07:00.261255 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:07:00.261414 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:07:00.263550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:07:00.263794 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:07:00.268561 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:07:00.268692 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:07:00.272132 ignition[1008]: INFO : Ignition 2.19.0 Dec 13 01:07:00.272132 ignition[1008]: INFO : Stage: umount Dec 13 01:07:00.274003 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:07:00.274003 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:07:00.274003 ignition[1008]: INFO : umount: umount passed Dec 13 01:07:00.278949 ignition[1008]: INFO : Ignition finished successfully Dec 13 01:07:00.275699 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:07:00.275887 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:07:00.277971 systemd[1]: Stopped target network.target - Network. Dec 13 01:07:00.278980 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:07:00.279047 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Dec 13 01:07:00.280979 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:07:00.281039 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:07:00.282900 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:07:00.282959 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:07:00.284908 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:07:00.284960 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:07:00.287034 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:07:00.289105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:07:00.292121 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:07:00.292773 systemd-networkd[778]: eth0: DHCPv6 lease lost Dec 13 01:07:00.295020 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:07:00.295158 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:07:00.297477 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:07:00.297523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:07:00.306934 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:07:00.308393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:07:00.308482 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:07:00.311147 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:07:00.313660 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:07:00.313813 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:07:00.318354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:07:00.318447 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:07:00.320348 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:07:00.320397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:07:00.322108 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:07:00.322157 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:07:00.336265 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:07:00.336461 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:07:00.339638 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:07:00.339708 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:07:00.341266 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:07:00.341311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:07:00.343297 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:07:00.343348 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:07:00.345692 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:07:00.345752 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:07:00.347621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:07:00.347672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:07:00.354866 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:07:00.356735 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:07:00.356797 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:07:00.359103 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:07:00.359153 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:07:00.361493 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:07:00.361541 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:07:00.362806 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:07:00.362854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:00.365612 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:07:00.365748 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:07:00.367563 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:07:00.367667 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:07:00.452812 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:07:00.452961 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:07:00.453642 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:07:00.456008 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:07:00.456065 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:07:00.467050 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:07:00.474288 systemd[1]: Switching root. Dec 13 01:07:00.510334 systemd-journald[192]: Journal stopped Dec 13 01:07:01.678415 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Dec 13 01:07:01.678487 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:07:01.678505 kernel: SELinux: policy capability open_perms=1 Dec 13 01:07:01.678517 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:07:01.678528 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:07:01.678539 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:07:01.678550 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:07:01.678565 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:07:01.678577 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:07:01.678588 kernel: audit: type=1403 audit(1734052020.852:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:07:01.678600 systemd[1]: Successfully loaded SELinux policy in 44.028ms. Dec 13 01:07:01.678621 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.019ms. Dec 13 01:07:01.678634 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:07:01.678648 systemd[1]: Detected virtualization kvm. Dec 13 01:07:01.678659 systemd[1]: Detected architecture x86-64. Dec 13 01:07:01.678674 systemd[1]: Detected first boot. 
Dec 13 01:07:01.678686 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:07:01.678698 zram_generator::config[1053]: No configuration found. Dec 13 01:07:01.678715 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:07:01.678740 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:07:01.678752 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:07:01.678764 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:07:01.678777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:07:01.678789 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:07:01.678803 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:07:01.678815 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:07:01.678827 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:07:01.678839 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:07:01.678851 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:07:01.678862 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:07:01.678874 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:07:01.678886 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:07:01.678901 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:07:01.678913 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:07:01.678926 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:07:01.678939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:07:01.678950 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:07:01.678962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:07:01.678974 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:07:01.678986 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:07:01.678998 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:07:01.679012 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:07:01.679024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:07:01.679036 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:07:01.679048 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:07:01.679060 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:07:01.679072 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:07:01.679084 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:07:01.679095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:07:01.679110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:07:01.679122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:07:01.679133 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:07:01.679146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:07:01.679158 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:07:01.679172 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:07:01.679185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:01.679197 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:07:01.679210 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:07:01.679223 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:07:01.679244 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:07:01.679256 systemd[1]: Reached target machines.target - Containers. Dec 13 01:07:01.679268 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:07:01.679281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:07:01.679293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:07:01.679305 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:07:01.679317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:01.679331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:07:01.679343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:01.679355 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:07:01.679367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:07:01.679380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:07:01.679392 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:07:01.679404 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:07:01.679416 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:07:01.679427 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:07:01.679441 kernel: loop: module loaded Dec 13 01:07:01.679453 kernel: fuse: init (API version 7.39) Dec 13 01:07:01.679465 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:07:01.679477 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:07:01.679488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:07:01.679500 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:07:01.679512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:07:01.679523 kernel: ACPI: bus type drm_connector registered Dec 13 01:07:01.679535 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:07:01.679549 systemd[1]: Stopped verity-setup.service. Dec 13 01:07:01.679579 systemd-journald[1126]: Collecting audit messages is disabled. 
Dec 13 01:07:01.679601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:01.679613 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:07:01.679625 systemd-journald[1126]: Journal started Dec 13 01:07:01.679647 systemd-journald[1126]: Runtime Journal (/run/log/journal/0286af54a97c434c9cb28383f061f4a8) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:07:01.432488 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:07:01.447611 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:07:01.448103 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:07:01.680893 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:07:01.682879 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:07:01.684449 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:07:01.685555 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:07:01.686768 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:07:01.687990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:07:01.689346 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:07:01.691015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:07:01.692846 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:07:01.693066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:07:01.694648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:01.694874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:07:01.696446 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:07:01.696665 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:07:01.698125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:01.698343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:01.700281 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:07:01.700506 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:07:01.702393 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:07:01.702620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:07:01.704133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:07:01.705652 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:07:01.707291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:07:01.726559 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:07:01.731805 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:07:01.734062 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:07:01.735169 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:07:01.735201 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 13 01:07:01.737144 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:07:01.740026 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:07:01.743849 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:07:01.745095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:01.748447 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:07:01.753832 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:07:01.755017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:07:01.762415 systemd-journald[1126]: Time spent on flushing to /var/log/journal/0286af54a97c434c9cb28383f061f4a8 is 39.408ms for 949 entries. Dec 13 01:07:01.762415 systemd-journald[1126]: System Journal (/var/log/journal/0286af54a97c434c9cb28383f061f4a8) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:07:01.829789 systemd-journald[1126]: Received client request to flush runtime journal. Dec 13 01:07:01.829835 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:07:01.760522 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:07:01.761687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:07:01.765221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:07:01.780059 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:07:01.783815 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:07:01.787580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:07:01.789293 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:07:01.790971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:07:01.792869 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:07:01.794895 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:07:01.836266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:07:01.802678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:07:01.816608 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:07:01.819497 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:07:01.836770 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:07:01.845600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:07:01.848941 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:07:01.858657 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:07:01.860086 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Dec 13 01:07:01.860105 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. 
Dec 13 01:07:01.860273 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:07:01.870573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:07:01.871910 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 01:07:01.879941 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:07:01.927770 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:07:01.948897 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:07:01.961910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:07:01.967746 kernel: loop3: detected capacity change from 0 to 211296 Dec 13 01:07:01.986552 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Dec 13 01:07:01.987071 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Dec 13 01:07:01.995771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:07:02.035775 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 01:07:02.048754 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:07:02.057401 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:07:02.058773 (sd-merge)[1193]: Merged extensions into '/usr'. Dec 13 01:07:02.064681 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:07:02.064701 systemd[1]: Reloading... Dec 13 01:07:02.160774 zram_generator::config[1217]: No configuration found. Dec 13 01:07:02.210756 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:07:02.303964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:07:02.355371 systemd[1]: Reloading finished in 290 ms. Dec 13 01:07:02.385170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:07:02.386713 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:07:02.398892 systemd[1]: Starting ensure-sysext.service... Dec 13 01:07:02.401081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:07:02.410774 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:07:02.410787 systemd[1]: Reloading... Dec 13 01:07:02.438620 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:07:02.439137 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:07:02.440406 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:07:02.440838 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 13 01:07:02.440940 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 13 01:07:02.445232 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:07:02.445249 systemd-tmpfiles[1258]: Skipping /boot Dec 13 01:07:02.467747 zram_generator::config[1286]: No configuration found. 
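The (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extensions into /usr. A sysext image is only merged if it carries an extension-release marker; the sketch below checks an unpacked extension tree for that marker. The directory path and extension name are hypothetical examples, not values taken from this log.

from pathlib import Path

def has_sysext_marker(tree: Path, name: str) -> bool:
    # systemd-sysext expects usr/lib/extension-release.d/extension-release.<name>
    # inside the extension image before it will merge it.
    marker = tree / "usr" / "lib" / "extension-release.d" / f"extension-release.{name}"
    return marker.is_file()

print(has_sysext_marker(Path("/run/extensions/kubernetes"), "kubernetes"))  # hypothetical tree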
Dec 13 01:07:02.479843 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:07:02.480044 systemd-tmpfiles[1258]: Skipping /boot Dec 13 01:07:02.687240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:07:02.736089 systemd[1]: Reloading finished in 324 ms. Dec 13 01:07:02.752987 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:07:02.754668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:07:02.770550 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:07:02.773075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:07:02.775406 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:07:02.780498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:07:02.784623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:07:02.788934 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:07:02.794414 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:02.794593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:07:02.797287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:02.800028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:02.803811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:07:02.805900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:02.817177 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:07:02.818787 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:02.822497 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:07:02.825119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:02.825515 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:02.828764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:02.829009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:07:02.829302 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Dec 13 01:07:02.831339 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:07:02.831519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:07:02.839573 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:07:02.844201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:02.844412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 13 01:07:02.853813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:07:02.854680 augenrules[1355]: No rules Dec 13 01:07:02.857523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:07:02.860027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:07:02.867061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:07:02.868536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:07:02.873310 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:07:02.874701 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:07:02.875789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:07:02.878257 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:07:02.880410 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:07:02.882287 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:07:02.882508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:07:02.884331 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:07:02.884550 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:07:02.886362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:07:02.886577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:07:02.894325 systemd[1]: Finished ensure-sysext.service. Dec 13 01:07:02.905307 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:07:02.905615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:07:02.907524 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:07:02.927992 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:07:02.930832 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:07:02.930929 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:07:02.941284 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:07:02.942691 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:07:02.943177 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:07:02.948765 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1387) Dec 13 01:07:02.954158 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1387) Dec 13 01:07:02.960767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:07:03.027153 systemd-resolved[1328]: Positive Trust Anchors: Dec 13 01:07:03.030715 systemd-resolved[1328]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:07:03.030775 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:07:03.037599 systemd-resolved[1328]: Defaulting to hostname 'linux'. Dec 13 01:07:03.037750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1370) Dec 13 01:07:03.041660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:07:03.043835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:07:03.074802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:07:03.080147 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:07:03.092002 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:07:03.093482 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:07:03.106931 systemd-networkd[1395]: lo: Link UP Dec 13 01:07:03.106947 systemd-networkd[1395]: lo: Gained carrier Dec 13 01:07:03.114579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:07:03.134164 systemd-networkd[1395]: Enumeration completed Dec 13 01:07:03.140918 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:07:03.144084 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:07:03.144096 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:07:03.145545 systemd-networkd[1395]: eth0: Link UP Dec 13 01:07:03.145554 systemd-networkd[1395]: eth0: Gained carrier Dec 13 01:07:03.145573 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:07:03.146986 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:07:03.153678 systemd[1]: Reached target network.target - Network. Dec 13 01:07:03.153843 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:07:03.164524 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:07:03.164740 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:07:04.321929 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:07:04.322114 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:07:03.165984 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:07:03.168339 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Dec 13 01:07:04.318154 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
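As a small consistency check on the DHCPv4 lease logged above (10.0.0.43/16 with gateway 10.0.0.1 on eth0), the sketch below confirms the gateway lies inside the leased /16; it is purely illustrative and plays no part in the boot sequence.

import ipaddress

iface = ipaddress.ip_interface("10.0.0.43/16")   # address from the lease above
gateway = ipaddress.ip_address("10.0.0.1")       # gateway from the lease above
print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is inside the leased network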
Dec 13 01:07:04.318195 systemd-timesyncd[1396]: Initial clock synchronization to Fri 2024-12-13 01:07:04.318062 UTC. Dec 13 01:07:04.320160 systemd-resolved[1328]: Clock change detected. Flushing caches. Dec 13 01:07:04.321318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:07:04.323506 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:07:04.374419 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:07:04.386820 kernel: kvm_amd: TSC scaling supported Dec 13 01:07:04.386939 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:07:04.386958 kernel: kvm_amd: Nested Paging enabled Dec 13 01:07:04.386973 kernel: kvm_amd: LBR virtualization supported Dec 13 01:07:04.388641 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:07:04.388781 kernel: kvm_amd: Virtual GIF supported Dec 13 01:07:04.411165 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:07:04.444205 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:07:04.459725 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:07:04.461477 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:07:04.469311 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:07:04.502912 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:07:04.504784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:07:04.506140 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:07:04.507327 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:07:04.508595 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:07:04.510069 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:07:04.511270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:07:04.512631 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:07:04.514095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:07:04.514120 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:07:04.515169 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:07:04.517160 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:07:04.520039 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:07:04.526458 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:07:04.529275 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:07:04.531081 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:07:04.532432 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:07:04.533593 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:07:04.534575 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:07:04.534599 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Dec 13 01:07:04.535634 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:07:04.537665 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:07:04.541514 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:07:04.541492 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:07:04.546609 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:07:04.547745 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:07:04.548731 jq[1430]: false Dec 13 01:07:04.549229 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:07:04.557538 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:07:04.560555 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:07:04.563274 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:07:04.568488 extend-filesystems[1431]: Found loop3 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found loop4 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found loop5 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found sr0 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda1 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda2 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda3 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found usr Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda4 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda6 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda7 Dec 13 01:07:04.568488 extend-filesystems[1431]: Found vda9 Dec 13 01:07:04.568488 extend-filesystems[1431]: Checking size of /dev/vda9 Dec 13 01:07:04.588477 extend-filesystems[1431]: Resized partition /dev/vda9 Dec 13 01:07:04.583651 dbus-daemon[1429]: [system] SELinux support is enabled Dec 13 01:07:04.575607 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:07:04.577177 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:07:04.577721 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:07:04.585274 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:07:04.588172 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:07:04.589747 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:07:04.591639 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:07:04.596323 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:07:04.599945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:07:04.600276 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:07:04.600724 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:07:04.601045 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 13 01:07:04.604175 jq[1451]: true Dec 13 01:07:04.604983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:07:04.606549 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:07:04.611417 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:07:04.619494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1363) Dec 13 01:07:04.632508 jq[1455]: true Dec 13 01:07:04.644784 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:07:04.646341 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:07:04.656947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:07:04.656977 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:07:04.659624 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:07:04.659661 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:07:04.673728 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:07:04.673752 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:07:04.674605 systemd-logind[1444]: New seat seat0. Dec 13 01:07:04.677665 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:07:04.677665 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:07:04.677665 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:07:04.676905 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:07:04.683377 update_engine[1448]: I20241213 01:07:04.678989 1448 main.cc:92] Flatcar Update Engine starting Dec 13 01:07:04.683377 update_engine[1448]: I20241213 01:07:04.680325 1448 update_check_scheduler.cc:74] Next update check in 3m3s Dec 13 01:07:04.683750 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Dec 13 01:07:04.677157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:07:04.680557 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:07:04.690235 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:07:04.691503 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:07:04.694067 tar[1454]: linux-amd64/helm Dec 13 01:07:04.715263 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:07:04.718612 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:07:04.721594 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:07:04.769836 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
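The resize2fs output above is in 4 KiB blocks; the short calculation below converts the before/after block counts into bytes and GiB to show how much the root filesystem grew. The block counts are copied from the log; the conversion itself is just arithmetic.

BLOCK_SIZE = 4096  # ext4 block size used by the filesystem above (4 KiB)
for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    size = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")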
Dec 13 01:07:04.779676 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:07:04.844786 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:07:04.939475 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:07:04.948707 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:07:04.955322 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:35080.service - OpenSSH per-connection server daemon (10.0.0.1:35080). Dec 13 01:07:04.958945 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:07:04.959156 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:07:04.967547 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:07:05.017836 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:07:05.032550 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:07:05.036259 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:07:05.038385 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:07:05.090193 sshd[1507]: Accepted publickey for core from 10.0.0.1 port 35080 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:05.092365 sshd[1507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:05.107772 systemd-logind[1444]: New session 1 of user core. Dec 13 01:07:05.109052 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:07:05.118660 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:07:05.174517 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:07:05.191859 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:07:05.198227 (systemd)[1518]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:07:05.200415 containerd[1461]: time="2024-12-13T01:07:05.200290758Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:07:05.236867 containerd[1461]: time="2024-12-13T01:07:05.236800477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:05.239332 containerd[1461]: time="2024-12-13T01:07:05.239263816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:05.239332 containerd[1461]: time="2024-12-13T01:07:05.239318048Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:07:05.239491 containerd[1461]: time="2024-12-13T01:07:05.239347533Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:07:05.239632 containerd[1461]: time="2024-12-13T01:07:05.239606158Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:07:05.239694 containerd[1461]: time="2024-12-13T01:07:05.239638028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:07:05.239797 containerd[1461]: time="2024-12-13T01:07:05.239768172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:05.239797 containerd[1461]: time="2024-12-13T01:07:05.239790724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240106 containerd[1461]: time="2024-12-13T01:07:05.240057374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240106 containerd[1461]: time="2024-12-13T01:07:05.240085718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240106 containerd[1461]: time="2024-12-13T01:07:05.240102960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240106 containerd[1461]: time="2024-12-13T01:07:05.240114912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240267 containerd[1461]: time="2024-12-13T01:07:05.240228976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240571 containerd[1461]: time="2024-12-13T01:07:05.240542705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240740 containerd[1461]: time="2024-12-13T01:07:05.240712002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:07:05.240740 containerd[1461]: time="2024-12-13T01:07:05.240735747Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:07:05.240866 containerd[1461]: time="2024-12-13T01:07:05.240848057Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:07:05.241202 containerd[1461]: time="2024-12-13T01:07:05.240929149Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:07:05.353078 tar[1454]: linux-amd64/LICENSE Dec 13 01:07:05.353212 tar[1454]: linux-amd64/README.md Dec 13 01:07:05.369493 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:07:05.411313 systemd[1518]: Queued start job for default target default.target. Dec 13 01:07:05.423653 systemd[1518]: Created slice app.slice - User Application Slice. Dec 13 01:07:05.423679 systemd[1518]: Reached target paths.target - Paths. Dec 13 01:07:05.423692 systemd[1518]: Reached target timers.target - Timers. Dec 13 01:07:05.425284 systemd[1518]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:07:05.436833 systemd[1518]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:07:05.436967 systemd[1518]: Reached target sockets.target - Sockets. Dec 13 01:07:05.436986 systemd[1518]: Reached target basic.target - Basic System. 
Dec 13 01:07:05.437023 systemd[1518]: Reached target default.target - Main User Target. Dec 13 01:07:05.437056 systemd[1518]: Startup finished in 230ms. Dec 13 01:07:05.437610 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:07:05.439504 containerd[1461]: time="2024-12-13T01:07:05.439454215Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:07:05.439595 containerd[1461]: time="2024-12-13T01:07:05.439538383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:07:05.439595 containerd[1461]: time="2024-12-13T01:07:05.439556997Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:07:05.439595 containerd[1461]: time="2024-12-13T01:07:05.439573679Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:07:05.439595 containerd[1461]: time="2024-12-13T01:07:05.439587024Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:07:05.439794 containerd[1461]: time="2024-12-13T01:07:05.439768694Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:07:05.440184 containerd[1461]: time="2024-12-13T01:07:05.440135813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:07:05.440176 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440356627Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440374480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440388577Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440423803Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440438581Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440454491Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440469358Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440485168Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440498794Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440511688Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440524772Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440552264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440567943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441245 containerd[1461]: time="2024-12-13T01:07:05.440580487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440593311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440605894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440622956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440636943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440651119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440663372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440678631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440689912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440702305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440739856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440758701Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440782185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440794218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441533 containerd[1461]: time="2024-12-13T01:07:05.440812181Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441630917Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441660502Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441679498Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441693995Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441706458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441721036Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441738298Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:07:05.441832 containerd[1461]: time="2024-12-13T01:07:05.441750281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:07:05.442293 containerd[1461]: time="2024-12-13T01:07:05.442198391Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:07:05.442293 containerd[1461]: time="2024-12-13T01:07:05.442262832Z" level=info msg="Connect containerd service" Dec 13 01:07:05.442468 containerd[1461]: time="2024-12-13T01:07:05.442317384Z" level=info msg="using legacy CRI server" Dec 13 01:07:05.442468 containerd[1461]: time="2024-12-13T01:07:05.442331541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:07:05.442515 containerd[1461]: time="2024-12-13T01:07:05.442487924Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:07:05.444277 containerd[1461]: time="2024-12-13T01:07:05.444245169Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:07:05.444522 containerd[1461]: time="2024-12-13T01:07:05.444416501Z" level=info msg="Start subscribing containerd event" Dec 13 01:07:05.444570 containerd[1461]: time="2024-12-13T01:07:05.444531536Z" level=info msg="Start recovering state" Dec 13 01:07:05.444735 containerd[1461]: time="2024-12-13T01:07:05.444694442Z" level=info msg="Start event monitor" Dec 13 01:07:05.444764 containerd[1461]: time="2024-12-13T01:07:05.444748594Z" level=info msg="Start snapshots syncer" Dec 13 01:07:05.444784 containerd[1461]: time="2024-12-13T01:07:05.444768020Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:07:05.444815 containerd[1461]: time="2024-12-13T01:07:05.444787767Z" level=info msg="Start streaming server" Dec 13 01:07:05.444834 containerd[1461]: time="2024-12-13T01:07:05.444695754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:07:05.444903 containerd[1461]: time="2024-12-13T01:07:05.444874680Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:07:05.445760 containerd[1461]: time="2024-12-13T01:07:05.444956143Z" level=info msg="containerd successfully booted in 0.246638s" Dec 13 01:07:05.444996 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:07:05.502566 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:35094.service - OpenSSH per-connection server daemon (10.0.0.1:35094). Dec 13 01:07:05.545545 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 35094 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:05.547324 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:05.551513 systemd-logind[1444]: New session 2 of user core. Dec 13 01:07:05.559694 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:07:05.617523 sshd[1536]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:05.628232 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:35094.service: Deactivated successfully. Dec 13 01:07:05.629855 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:07:05.631339 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. 
Dec 13 01:07:05.632522 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:35100.service - OpenSSH per-connection server daemon (10.0.0.1:35100). Dec 13 01:07:05.653353 systemd-logind[1444]: Removed session 2. Dec 13 01:07:05.674559 systemd-networkd[1395]: eth0: Gained IPv6LL Dec 13 01:07:05.677666 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:07:05.679606 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:07:05.681913 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 35100 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:05.683474 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:05.693615 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:07:05.696448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:05.698796 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:07:05.709690 systemd-logind[1444]: New session 3 of user core. Dec 13 01:07:05.710457 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:07:05.720491 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:07:05.720773 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:07:05.753740 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:07:05.756135 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:07:05.766948 sshd[1543]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:05.771239 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:35100.service: Deactivated successfully. Dec 13 01:07:05.773235 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:07:05.773892 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:07:05.774686 systemd-logind[1444]: Removed session 3. Dec 13 01:07:06.802370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:06.804287 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:07:06.805742 systemd[1]: Startup finished in 750ms (kernel) + 6.153s (initrd) + 4.847s (userspace) = 11.751s. Dec 13 01:07:06.818193 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:07:07.671931 kubelet[1571]: E1213 01:07:07.671472 1571 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:07:07.678607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:07:07.678829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:07:07.679233 systemd[1]: kubelet.service: Consumed 1.842s CPU time. Dec 13 01:07:15.778829 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:44068.service - OpenSSH per-connection server daemon (10.0.0.1:44068). 
Dec 13 01:07:15.817489 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 44068 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:15.819309 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:15.824023 systemd-logind[1444]: New session 4 of user core. Dec 13 01:07:15.839668 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:07:15.894836 sshd[1585]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:15.904338 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:44068.service: Deactivated successfully. Dec 13 01:07:15.906332 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:07:15.907942 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:07:15.920645 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:44070.service - OpenSSH per-connection server daemon (10.0.0.1:44070). Dec 13 01:07:15.921869 systemd-logind[1444]: Removed session 4. Dec 13 01:07:15.950923 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 44070 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:15.952453 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:15.956552 systemd-logind[1444]: New session 5 of user core. Dec 13 01:07:15.971574 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:07:16.021534 sshd[1592]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:16.039163 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:44070.service: Deactivated successfully. Dec 13 01:07:16.040855 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:07:16.042109 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:07:16.043257 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:44084.service - OpenSSH per-connection server daemon (10.0.0.1:44084). Dec 13 01:07:16.043928 systemd-logind[1444]: Removed session 5. Dec 13 01:07:16.076960 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 44084 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:16.078263 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:16.082032 systemd-logind[1444]: New session 6 of user core. Dec 13 01:07:16.091502 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:07:16.146406 sshd[1599]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:16.158230 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:44084.service: Deactivated successfully. Dec 13 01:07:16.160138 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:07:16.161789 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:07:16.170626 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:44086.service - OpenSSH per-connection server daemon (10.0.0.1:44086). Dec 13 01:07:16.171603 systemd-logind[1444]: Removed session 6. Dec 13 01:07:16.201820 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 44086 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:16.203749 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:16.207850 systemd-logind[1444]: New session 7 of user core. Dec 13 01:07:16.217565 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 01:07:16.277615 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:07:16.278065 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:07:16.298620 sudo[1609]: pam_unix(sudo:session): session closed for user root Dec 13 01:07:16.300673 sshd[1606]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:16.318679 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:44086.service: Deactivated successfully. Dec 13 01:07:16.320542 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:07:16.321947 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:07:16.323311 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:44088.service - OpenSSH per-connection server daemon (10.0.0.1:44088). Dec 13 01:07:16.324247 systemd-logind[1444]: Removed session 7. Dec 13 01:07:16.357909 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 44088 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:16.359335 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:16.363270 systemd-logind[1444]: New session 8 of user core. Dec 13 01:07:16.373513 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:07:16.426673 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:07:16.427026 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:07:16.430812 sudo[1618]: pam_unix(sudo:session): session closed for user root Dec 13 01:07:16.437048 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:07:16.437444 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:07:16.455660 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:07:16.457595 auditctl[1621]: No rules Dec 13 01:07:16.458863 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:07:16.459141 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:07:16.461005 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:07:16.494076 augenrules[1639]: No rules Dec 13 01:07:16.495883 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:07:16.497276 sudo[1617]: pam_unix(sudo:session): session closed for user root Dec 13 01:07:16.499368 sshd[1614]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:16.515369 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:44088.service: Deactivated successfully. Dec 13 01:07:16.517166 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:07:16.518507 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:07:16.531691 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:44090.service - OpenSSH per-connection server daemon (10.0.0.1:44090). Dec 13 01:07:16.532547 systemd-logind[1444]: Removed session 8. Dec 13 01:07:16.563035 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 44090 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:16.564836 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:16.569122 systemd-logind[1444]: New session 9 of user core. Dec 13 01:07:16.578516 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 01:07:16.630499 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:07:16.630842 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:07:16.935709 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:07:16.935875 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:07:17.218183 dockerd[1668]: time="2024-12-13T01:07:17.218026793Z" level=info msg="Starting up" Dec 13 01:07:17.329423 dockerd[1668]: time="2024-12-13T01:07:17.329368488Z" level=info msg="Loading containers: start." Dec 13 01:07:17.451425 kernel: Initializing XFRM netlink socket Dec 13 01:07:17.533662 systemd-networkd[1395]: docker0: Link UP Dec 13 01:07:17.566434 dockerd[1668]: time="2024-12-13T01:07:17.566361384Z" level=info msg="Loading containers: done." Dec 13 01:07:17.581836 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck305138542-merged.mount: Deactivated successfully. Dec 13 01:07:17.582124 dockerd[1668]: time="2024-12-13T01:07:17.582054014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:07:17.582178 dockerd[1668]: time="2024-12-13T01:07:17.582155985Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:07:17.582288 dockerd[1668]: time="2024-12-13T01:07:17.582264919Z" level=info msg="Daemon has completed initialization" Dec 13 01:07:17.626102 dockerd[1668]: time="2024-12-13T01:07:17.626006280Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:07:17.626279 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:07:17.913292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:07:17.922725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:18.120511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:18.127189 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:07:18.178262 kubelet[1824]: E1213 01:07:18.178066 1824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:07:18.185525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:07:18.185761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:07:18.613548 containerd[1461]: time="2024-12-13T01:07:18.613437235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:07:20.335588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383348949.mount: Deactivated successfully. 
Dec 13 01:07:21.376477 containerd[1461]: time="2024-12-13T01:07:21.376419393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:21.377153 containerd[1461]: time="2024-12-13T01:07:21.377082597Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:07:21.378379 containerd[1461]: time="2024-12-13T01:07:21.378347369Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:21.380890 containerd[1461]: time="2024-12-13T01:07:21.380852186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:21.382107 containerd[1461]: time="2024-12-13T01:07:21.382019726Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.768535091s" Dec 13 01:07:21.382149 containerd[1461]: time="2024-12-13T01:07:21.382107771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:07:21.405380 containerd[1461]: time="2024-12-13T01:07:21.405344879Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:07:23.159190 containerd[1461]: time="2024-12-13T01:07:23.159118811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:23.160012 containerd[1461]: time="2024-12-13T01:07:23.159925784Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:07:23.161185 containerd[1461]: time="2024-12-13T01:07:23.161147195Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:23.163892 containerd[1461]: time="2024-12-13T01:07:23.163833823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:23.164736 containerd[1461]: time="2024-12-13T01:07:23.164681303Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.759306588s" Dec 13 01:07:23.164800 containerd[1461]: time="2024-12-13T01:07:23.164733340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 
01:07:23.187720 containerd[1461]: time="2024-12-13T01:07:23.187613208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:07:24.140329 containerd[1461]: time="2024-12-13T01:07:24.140270260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:24.141019 containerd[1461]: time="2024-12-13T01:07:24.140965334Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:07:24.142266 containerd[1461]: time="2024-12-13T01:07:24.142203986Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:24.145000 containerd[1461]: time="2024-12-13T01:07:24.144957540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:24.145967 containerd[1461]: time="2024-12-13T01:07:24.145934543Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 958.275869ms" Dec 13 01:07:24.146012 containerd[1461]: time="2024-12-13T01:07:24.145968787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:07:24.167485 containerd[1461]: time="2024-12-13T01:07:24.167433491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:07:25.154962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954146693.mount: Deactivated successfully. 
Dec 13 01:07:25.722414 containerd[1461]: time="2024-12-13T01:07:25.722336820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:25.723069 containerd[1461]: time="2024-12-13T01:07:25.722989894Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:07:25.724412 containerd[1461]: time="2024-12-13T01:07:25.724354784Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:25.726556 containerd[1461]: time="2024-12-13T01:07:25.726523591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:25.727339 containerd[1461]: time="2024-12-13T01:07:25.727283136Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.559805442s" Dec 13 01:07:25.727383 containerd[1461]: time="2024-12-13T01:07:25.727341675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:07:25.752477 containerd[1461]: time="2024-12-13T01:07:25.752417982Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:07:26.322979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160656924.mount: Deactivated successfully. 
Dec 13 01:07:26.926638 containerd[1461]: time="2024-12-13T01:07:26.926583061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:26.927300 containerd[1461]: time="2024-12-13T01:07:26.927230004Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:07:26.928435 containerd[1461]: time="2024-12-13T01:07:26.928389298Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:26.931262 containerd[1461]: time="2024-12-13T01:07:26.931231729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:26.933648 containerd[1461]: time="2024-12-13T01:07:26.933462172Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.180993674s" Dec 13 01:07:26.933648 containerd[1461]: time="2024-12-13T01:07:26.933499622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:07:26.956586 containerd[1461]: time="2024-12-13T01:07:26.956536204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:07:27.502505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663789416.mount: Deactivated successfully. 
Dec 13 01:07:27.508019 containerd[1461]: time="2024-12-13T01:07:27.507983795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:27.508722 containerd[1461]: time="2024-12-13T01:07:27.508666205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:07:27.509797 containerd[1461]: time="2024-12-13T01:07:27.509769003Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:27.511863 containerd[1461]: time="2024-12-13T01:07:27.511830439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:27.512547 containerd[1461]: time="2024-12-13T01:07:27.512513620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 555.944133ms" Dec 13 01:07:27.512588 containerd[1461]: time="2024-12-13T01:07:27.512546943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:07:27.534249 containerd[1461]: time="2024-12-13T01:07:27.534187076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:07:28.149339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092025185.mount: Deactivated successfully. Dec 13 01:07:28.436001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:07:28.445673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:28.634072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:28.638207 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:07:28.797882 kubelet[2025]: E1213 01:07:28.797159 2025 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:07:28.802496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:07:28.802717 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:07:30.554938 containerd[1461]: time="2024-12-13T01:07:30.554866117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:30.555766 containerd[1461]: time="2024-12-13T01:07:30.555718005Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:07:30.557739 containerd[1461]: time="2024-12-13T01:07:30.557696705Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:30.560657 containerd[1461]: time="2024-12-13T01:07:30.560625778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:07:30.561647 containerd[1461]: time="2024-12-13T01:07:30.561614092Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.02738613s" Dec 13 01:07:30.561717 containerd[1461]: time="2024-12-13T01:07:30.561649108Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:07:33.170983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:33.179759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:33.209204 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit session-9.scope)... Dec 13 01:07:33.209219 systemd[1]: Reloading... Dec 13 01:07:33.286441 zram_generator::config[2179]: No configuration found. Dec 13 01:07:33.485161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:07:33.560109 systemd[1]: Reloading finished in 350 ms. Dec 13 01:07:33.614133 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:07:33.614382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:33.616876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:33.755612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:33.760799 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:07:33.801836 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:07:33.801836 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:07:33.801836 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:07:33.801836 kubelet[2228]: I1213 01:07:33.801466 2228 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:07:34.350112 kubelet[2228]: I1213 01:07:34.350076 2228 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:07:34.350112 kubelet[2228]: I1213 01:07:34.350105 2228 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:07:34.350333 kubelet[2228]: I1213 01:07:34.350318 2228 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:07:34.365667 kubelet[2228]: E1213 01:07:34.365635 2228 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.366302 kubelet[2228]: I1213 01:07:34.366276 2228 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:07:34.376879 kubelet[2228]: I1213 01:07:34.376850 2228 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:07:34.378102 kubelet[2228]: I1213 01:07:34.378074 2228 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:07:34.378267 kubelet[2228]: I1213 01:07:34.378242 2228 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:07:34.378346 kubelet[2228]: I1213 01:07:34.378278 2228 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:07:34.378346 kubelet[2228]: I1213 01:07:34.378289 2228 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:07:34.378432 kubelet[2228]: I1213 01:07:34.378415 2228 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:07:34.378554 kubelet[2228]: I1213 01:07:34.378530 2228 
kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:07:34.378554 kubelet[2228]: I1213 01:07:34.378555 2228 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:07:34.378610 kubelet[2228]: I1213 01:07:34.378594 2228 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:07:34.378633 kubelet[2228]: I1213 01:07:34.378612 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:07:34.379682 kubelet[2228]: W1213 01:07:34.379588 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.379682 kubelet[2228]: E1213 01:07:34.379635 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.380412 kubelet[2228]: W1213 01:07:34.380343 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.380412 kubelet[2228]: E1213 01:07:34.380409 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.380680 kubelet[2228]: I1213 01:07:34.380659 2228 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:07:34.383248 kubelet[2228]: I1213 01:07:34.383219 2228 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:07:34.384072 kubelet[2228]: W1213 01:07:34.384043 2228 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:07:34.384847 kubelet[2228]: I1213 01:07:34.384677 2228 server.go:1256] "Started kubelet" Dec 13 01:07:34.385455 kubelet[2228]: I1213 01:07:34.385289 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:07:34.388777 kubelet[2228]: I1213 01:07:34.388190 2228 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:07:34.388777 kubelet[2228]: I1213 01:07:34.388696 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:07:34.389826 kubelet[2228]: I1213 01:07:34.389683 2228 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:07:34.390772 kubelet[2228]: E1213 01:07:34.390749 2228 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109727f3a20d77 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:07:34.384643447 +0000 UTC m=+0.619691063,LastTimestamp:2024-12-13 01:07:34.384643447 +0000 UTC m=+0.619691063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:07:34.391471 kubelet[2228]: I1213 01:07:34.391456 2228 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:07:34.391814 kubelet[2228]: I1213 01:07:34.391799 2228 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:07:34.392002 kubelet[2228]: I1213 01:07:34.391991 2228 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:07:34.392430 kubelet[2228]: W1213 01:07:34.392133 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.392430 kubelet[2228]: E1213 01:07:34.392181 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.392430 kubelet[2228]: E1213 01:07:34.392249 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Dec 13 01:07:34.393045 kubelet[2228]: I1213 01:07:34.393018 2228 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:07:34.394007 kubelet[2228]: I1213 01:07:34.393668 2228 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:07:34.394007 kubelet[2228]: I1213 01:07:34.393684 2228 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:07:34.395027 kubelet[2228]: I1213 01:07:34.395012 2228 factory.go:221] Registration of the 
containerd container factory successfully Dec 13 01:07:34.395086 kubelet[2228]: E1213 01:07:34.395059 2228 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:07:34.405659 kubelet[2228]: I1213 01:07:34.405623 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:07:34.406842 kubelet[2228]: I1213 01:07:34.406811 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:07:34.406888 kubelet[2228]: I1213 01:07:34.406851 2228 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:07:34.406888 kubelet[2228]: I1213 01:07:34.406873 2228 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:07:34.406941 kubelet[2228]: E1213 01:07:34.406919 2228 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:07:34.411182 kubelet[2228]: W1213 01:07:34.411139 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.411182 kubelet[2228]: E1213 01:07:34.411181 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:34.411991 kubelet[2228]: I1213 01:07:34.411971 2228 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:07:34.411991 kubelet[2228]: I1213 01:07:34.411989 2228 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:07:34.412057 kubelet[2228]: I1213 01:07:34.412007 2228 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:07:34.492933 kubelet[2228]: I1213 01:07:34.492911 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:07:34.493207 kubelet[2228]: E1213 01:07:34.493189 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Dec 13 01:07:34.507449 kubelet[2228]: E1213 01:07:34.507422 2228 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:07:34.593143 kubelet[2228]: E1213 01:07:34.593116 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Dec 13 01:07:34.676210 kubelet[2228]: I1213 01:07:34.676107 2228 policy_none.go:49] "None policy: Start" Dec 13 01:07:34.677093 kubelet[2228]: I1213 01:07:34.677037 2228 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:07:34.677093 kubelet[2228]: I1213 01:07:34.677071 2228 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:07:34.685578 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 01:07:34.694233 kubelet[2228]: I1213 01:07:34.694199 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:07:34.694625 kubelet[2228]: E1213 01:07:34.694595 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Dec 13 01:07:34.701168 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:07:34.704459 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:07:34.708288 kubelet[2228]: E1213 01:07:34.708251 2228 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:07:34.716446 kubelet[2228]: I1213 01:07:34.716421 2228 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:07:34.717087 kubelet[2228]: I1213 01:07:34.716996 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:07:34.718158 kubelet[2228]: E1213 01:07:34.718139 2228 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:07:34.994089 kubelet[2228]: E1213 01:07:34.993979 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Dec 13 01:07:35.096708 kubelet[2228]: I1213 01:07:35.096667 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:07:35.097110 kubelet[2228]: E1213 01:07:35.097081 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Dec 13 01:07:35.109237 kubelet[2228]: I1213 01:07:35.109198 2228 topology_manager.go:215] "Topology Admit Handler" podUID="e4d96c40a745489096e97d4a82683662" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:07:35.110438 kubelet[2228]: I1213 01:07:35.110388 2228 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:07:35.111427 kubelet[2228]: I1213 01:07:35.111389 2228 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:07:35.117002 systemd[1]: Created slice kubepods-burstable-pode4d96c40a745489096e97d4a82683662.slice - libcontainer container kubepods-burstable-pode4d96c40a745489096e97d4a82683662.slice. Dec 13 01:07:35.132334 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 01:07:35.148085 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
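Note: the "connection refused" reflector errors and the "Failed to ensure lease exists, will retry" entries above are the normal static-pod bootstrap window: the kubelet keeps retrying the API server at 10.0.0.43:6443 with a doubling interval (200ms, 400ms, 800ms, then 1.6s later in the log) until the kube-apiserver static pod it is about to launch comes up. A minimal, hypothetical Go sketch of that probe-with-backoff pattern, using only the values visible in the log (this is not the kubelet's own retry code):

```go
// Illustrative only: probe an API server endpoint with doubling backoff,
// mirroring the retry intervals logged above (200ms -> 400ms -> 800ms -> 1.6s).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	endpoint := "10.0.0.43:6443" // address taken from the log entries above
	interval := 200 * time.Millisecond

	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable")
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2 // doubles each round, as the lease controller's interval does above
	}
	fmt.Println("giving up; API server still unreachable")
}
```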
Dec 13 01:07:35.195923 kubelet[2228]: I1213 01:07:35.195890 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:35.195923 kubelet[2228]: I1213 01:07:35.195927 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:35.195923 kubelet[2228]: I1213 01:07:35.195947 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:35.196134 kubelet[2228]: I1213 01:07:35.195972 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:35.196134 kubelet[2228]: I1213 01:07:35.195994 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4d96c40a745489096e97d4a82683662-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4d96c40a745489096e97d4a82683662\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:35.196134 kubelet[2228]: I1213 01:07:35.196087 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4d96c40a745489096e97d4a82683662-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d96c40a745489096e97d4a82683662\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:35.196241 kubelet[2228]: I1213 01:07:35.196204 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:35.196278 kubelet[2228]: I1213 01:07:35.196247 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:07:35.196278 kubelet[2228]: I1213 01:07:35.196267 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4d96c40a745489096e97d4a82683662-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d96c40a745489096e97d4a82683662\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:35.298920 kubelet[2228]: W1213 01:07:35.298786 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.298920 kubelet[2228]: E1213 01:07:35.298832 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.382789 kubelet[2228]: W1213 01:07:35.382731 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.382789 kubelet[2228]: E1213 01:07:35.382766 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.431068 kubelet[2228]: E1213 01:07:35.431033 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:35.431717 containerd[1461]: time="2024-12-13T01:07:35.431660868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4d96c40a745489096e97d4a82683662,Namespace:kube-system,Attempt:0,}" Dec 13 01:07:35.446055 kubelet[2228]: E1213 01:07:35.446014 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:35.446563 containerd[1461]: time="2024-12-13T01:07:35.446532488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:07:35.450810 kubelet[2228]: E1213 01:07:35.450789 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:35.451129 containerd[1461]: time="2024-12-13T01:07:35.451094954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:07:35.518360 kubelet[2228]: W1213 01:07:35.518279 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.518360 kubelet[2228]: E1213 01:07:35.518366 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.708789 kubelet[2228]: W1213 01:07:35.708639 2228 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list 
*v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.708789 kubelet[2228]: E1213 01:07:35.708706 2228 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:35.795528 kubelet[2228]: E1213 01:07:35.795482 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Dec 13 01:07:35.899226 kubelet[2228]: I1213 01:07:35.899182 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:07:35.899541 kubelet[2228]: E1213 01:07:35.899521 2228 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Dec 13 01:07:36.084482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598272405.mount: Deactivated successfully. Dec 13 01:07:36.091747 containerd[1461]: time="2024-12-13T01:07:36.091689708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:07:36.092759 containerd[1461]: time="2024-12-13T01:07:36.092716413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:07:36.093886 containerd[1461]: time="2024-12-13T01:07:36.093805265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:07:36.094781 containerd[1461]: time="2024-12-13T01:07:36.094748073Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:07:36.096064 containerd[1461]: time="2024-12-13T01:07:36.096023064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:07:36.097020 containerd[1461]: time="2024-12-13T01:07:36.096980400Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:07:36.097928 containerd[1461]: time="2024-12-13T01:07:36.097873735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:07:36.100908 containerd[1461]: time="2024-12-13T01:07:36.100867489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:07:36.102497 containerd[1461]: time="2024-12-13T01:07:36.102461458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 670.718717ms" Dec 13 01:07:36.103266 containerd[1461]: time="2024-12-13T01:07:36.103230771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.08909ms" Dec 13 01:07:36.103973 containerd[1461]: time="2024-12-13T01:07:36.103937196Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.347862ms" Dec 13 01:07:36.251115 containerd[1461]: time="2024-12-13T01:07:36.251032502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:07:36.251115 containerd[1461]: time="2024-12-13T01:07:36.251084710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:07:36.251115 containerd[1461]: time="2024-12-13T01:07:36.251099538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:36.251469 containerd[1461]: time="2024-12-13T01:07:36.251177294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:36.252849 containerd[1461]: time="2024-12-13T01:07:36.252560538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:07:36.252849 containerd[1461]: time="2024-12-13T01:07:36.252600543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:07:36.252849 containerd[1461]: time="2024-12-13T01:07:36.252614479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:36.252849 containerd[1461]: time="2024-12-13T01:07:36.252722572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:36.255698 containerd[1461]: time="2024-12-13T01:07:36.255546508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:07:36.255698 containerd[1461]: time="2024-12-13T01:07:36.255591552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:07:36.255698 containerd[1461]: time="2024-12-13T01:07:36.255601811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:36.256421 containerd[1461]: time="2024-12-13T01:07:36.255667815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:36.281549 systemd[1]: Started cri-containerd-2e549c299e9af99b299111106c57ecd151731e4acd7b57b6fa8c4df520142113.scope - libcontainer container 2e549c299e9af99b299111106c57ecd151731e4acd7b57b6fa8c4df520142113. Dec 13 01:07:36.283239 systemd[1]: Started cri-containerd-5c742736950d46147387ba2fbc5ee467fcf75459081f7f2efb23a06fec42b4a4.scope - libcontainer container 5c742736950d46147387ba2fbc5ee467fcf75459081f7f2efb23a06fec42b4a4. Dec 13 01:07:36.285212 systemd[1]: Started cri-containerd-cb8ae1d98b3169b8d1bbfc09409c46f26bb9cf3967bc7cfc190d74bb45d33857.scope - libcontainer container cb8ae1d98b3169b8d1bbfc09409c46f26bb9cf3967bc7cfc190d74bb45d33857. Dec 13 01:07:36.328462 containerd[1461]: time="2024-12-13T01:07:36.328174671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4d96c40a745489096e97d4a82683662,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e549c299e9af99b299111106c57ecd151731e4acd7b57b6fa8c4df520142113\"" Dec 13 01:07:36.330835 kubelet[2228]: E1213 01:07:36.330727 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:36.331601 containerd[1461]: time="2024-12-13T01:07:36.331293741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb8ae1d98b3169b8d1bbfc09409c46f26bb9cf3967bc7cfc190d74bb45d33857\"" Dec 13 01:07:36.332061 kubelet[2228]: E1213 01:07:36.332043 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:36.334974 containerd[1461]: time="2024-12-13T01:07:36.334826967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c742736950d46147387ba2fbc5ee467fcf75459081f7f2efb23a06fec42b4a4\"" Dec 13 01:07:36.335204 containerd[1461]: time="2024-12-13T01:07:36.335132790Z" level=info msg="CreateContainer within sandbox \"cb8ae1d98b3169b8d1bbfc09409c46f26bb9cf3967bc7cfc190d74bb45d33857\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:07:36.335204 containerd[1461]: time="2024-12-13T01:07:36.335158448Z" level=info msg="CreateContainer within sandbox \"2e549c299e9af99b299111106c57ecd151731e4acd7b57b6fa8c4df520142113\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:07:36.335784 kubelet[2228]: E1213 01:07:36.335756 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:36.337598 containerd[1461]: time="2024-12-13T01:07:36.337567536Z" level=info msg="CreateContainer within sandbox \"5c742736950d46147387ba2fbc5ee467fcf75459081f7f2efb23a06fec42b4a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:07:36.365470 containerd[1461]: time="2024-12-13T01:07:36.365422264Z" level=info msg="CreateContainer within sandbox \"cb8ae1d98b3169b8d1bbfc09409c46f26bb9cf3967bc7cfc190d74bb45d33857\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"942f9be38a386e57e12bd123900f3247eff2aef5d4f5d089141256556b27a430\"" Dec 
13 01:07:36.366010 containerd[1461]: time="2024-12-13T01:07:36.365981723Z" level=info msg="StartContainer for \"942f9be38a386e57e12bd123900f3247eff2aef5d4f5d089141256556b27a430\"" Dec 13 01:07:36.371132 containerd[1461]: time="2024-12-13T01:07:36.371093069Z" level=info msg="CreateContainer within sandbox \"5c742736950d46147387ba2fbc5ee467fcf75459081f7f2efb23a06fec42b4a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f54079867d9e3f4b132765f109b260269b3a23639d5f6355bda6b9b103146e9b\"" Dec 13 01:07:36.371893 containerd[1461]: time="2024-12-13T01:07:36.371690479Z" level=info msg="StartContainer for \"f54079867d9e3f4b132765f109b260269b3a23639d5f6355bda6b9b103146e9b\"" Dec 13 01:07:36.373447 containerd[1461]: time="2024-12-13T01:07:36.373391168Z" level=info msg="CreateContainer within sandbox \"2e549c299e9af99b299111106c57ecd151731e4acd7b57b6fa8c4df520142113\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f5237b2383cc2cf1f5a9c2c3f495ada345339fc247324f26abde170a1eac163d\"" Dec 13 01:07:36.374029 containerd[1461]: time="2024-12-13T01:07:36.374007354Z" level=info msg="StartContainer for \"f5237b2383cc2cf1f5a9c2c3f495ada345339fc247324f26abde170a1eac163d\"" Dec 13 01:07:36.375421 kubelet[2228]: E1213 01:07:36.375383 2228 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Dec 13 01:07:36.400610 systemd[1]: Started cri-containerd-942f9be38a386e57e12bd123900f3247eff2aef5d4f5d089141256556b27a430.scope - libcontainer container 942f9be38a386e57e12bd123900f3247eff2aef5d4f5d089141256556b27a430. Dec 13 01:07:36.405372 systemd[1]: Started cri-containerd-f5237b2383cc2cf1f5a9c2c3f495ada345339fc247324f26abde170a1eac163d.scope - libcontainer container f5237b2383cc2cf1f5a9c2c3f495ada345339fc247324f26abde170a1eac163d. Dec 13 01:07:36.407782 systemd[1]: Started cri-containerd-f54079867d9e3f4b132765f109b260269b3a23639d5f6355bda6b9b103146e9b.scope - libcontainer container f54079867d9e3f4b132765f109b260269b3a23639d5f6355bda6b9b103146e9b. 
Dec 13 01:07:36.451213 containerd[1461]: time="2024-12-13T01:07:36.451147589Z" level=info msg="StartContainer for \"942f9be38a386e57e12bd123900f3247eff2aef5d4f5d089141256556b27a430\" returns successfully" Dec 13 01:07:36.454427 containerd[1461]: time="2024-12-13T01:07:36.454306393Z" level=info msg="StartContainer for \"f5237b2383cc2cf1f5a9c2c3f495ada345339fc247324f26abde170a1eac163d\" returns successfully" Dec 13 01:07:36.460813 containerd[1461]: time="2024-12-13T01:07:36.460763312Z" level=info msg="StartContainer for \"f54079867d9e3f4b132765f109b260269b3a23639d5f6355bda6b9b103146e9b\" returns successfully" Dec 13 01:07:37.417131 kubelet[2228]: E1213 01:07:37.417092 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:07:37.433419 kubelet[2228]: E1213 01:07:37.431821 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:37.436956 kubelet[2228]: E1213 01:07:37.436918 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:37.439040 kubelet[2228]: E1213 01:07:37.438383 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:37.500679 kubelet[2228]: I1213 01:07:37.500554 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:07:37.507319 kubelet[2228]: I1213 01:07:37.507287 2228 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:07:37.517455 kubelet[2228]: E1213 01:07:37.516673 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:37.617040 kubelet[2228]: E1213 01:07:37.616990 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:37.717647 kubelet[2228]: E1213 01:07:37.717502 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:37.818099 kubelet[2228]: E1213 01:07:37.818058 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:37.918679 kubelet[2228]: E1213 01:07:37.918649 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:38.019936 kubelet[2228]: E1213 01:07:38.019797 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:38.120145 kubelet[2228]: E1213 01:07:38.120122 2228 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:07:38.381888 kubelet[2228]: I1213 01:07:38.381755 2228 apiserver.go:52] "Watching apiserver" Dec 13 01:07:38.392496 kubelet[2228]: I1213 01:07:38.392445 2228 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:07:38.445868 kubelet[2228]: E1213 01:07:38.445780 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:38.446862 
kubelet[2228]: E1213 01:07:38.446823 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:38.446919 kubelet[2228]: E1213 01:07:38.446908 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:39.440882 kubelet[2228]: E1213 01:07:39.440840 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:39.441201 kubelet[2228]: E1213 01:07:39.441063 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:39.441335 kubelet[2228]: E1213 01:07:39.441308 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:40.354165 systemd[1]: Reloading requested from client PID 2511 ('systemctl') (unit session-9.scope)... Dec 13 01:07:40.354181 systemd[1]: Reloading... Dec 13 01:07:40.419423 zram_generator::config[2550]: No configuration found. Dec 13 01:07:40.442686 kubelet[2228]: E1213 01:07:40.442652 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:40.531581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:07:40.620661 systemd[1]: Reloading finished in 266 ms. Dec 13 01:07:40.660425 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:40.673995 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:07:40.674302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:40.674358 systemd[1]: kubelet.service: Consumed 1.081s CPU time, 114.2M memory peak, 0B memory swap peak. Dec 13 01:07:40.685629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:07:40.829816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:07:40.835854 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:07:40.886453 kubelet[2595]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:07:40.886453 kubelet[2595]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:07:40.886453 kubelet[2595]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
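Note: the recurring dns.go warning ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the host's resolv.conf lists more nameservers than the kubelet will pass through to pods; it keeps the first three and drops the rest, which is why exactly three addresses appear in the applied line. A hedged Go sketch of the same check, assuming the standard /etc/resolv.conf location:

```go
// Illustrative only: reproduce the condition behind the "Nameserver limits
// exceeded" warnings above -- more nameserver lines in /etc/resolv.conf than
// the kubelet propagates to pods (it applies at most three).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // limit the kubelet applies before trimming the list

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d found, only %v would be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}
```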
Dec 13 01:07:40.886812 kubelet[2595]: I1213 01:07:40.886459 2595 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:07:40.891453 kubelet[2595]: I1213 01:07:40.891413 2595 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:07:40.891453 kubelet[2595]: I1213 01:07:40.891444 2595 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:07:40.891706 kubelet[2595]: I1213 01:07:40.891684 2595 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:07:40.893824 kubelet[2595]: I1213 01:07:40.893804 2595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:07:40.897346 kubelet[2595]: I1213 01:07:40.897300 2595 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:07:40.908623 kubelet[2595]: I1213 01:07:40.908569 2595 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:07:40.908908 kubelet[2595]: I1213 01:07:40.908888 2595 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:07:40.909130 kubelet[2595]: I1213 01:07:40.909097 2595 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:07:40.909232 kubelet[2595]: I1213 01:07:40.909136 2595 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:07:40.909232 kubelet[2595]: I1213 01:07:40.909150 2595 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:07:40.909232 kubelet[2595]: I1213 01:07:40.909188 2595 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:07:40.909330 kubelet[2595]: I1213 01:07:40.909300 2595 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:07:40.909330 kubelet[2595]: I1213 01:07:40.909320 2595 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:07:40.909387 kubelet[2595]: I1213 01:07:40.909350 2595 kubelet.go:312] "Adding apiserver pod source" Dec 13 
01:07:40.909387 kubelet[2595]: I1213 01:07:40.909376 2595 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:07:40.911768 kubelet[2595]: I1213 01:07:40.911710 2595 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:07:40.912019 kubelet[2595]: I1213 01:07:40.911973 2595 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:07:40.912650 kubelet[2595]: I1213 01:07:40.912612 2595 server.go:1256] "Started kubelet" Dec 13 01:07:40.912821 kubelet[2595]: I1213 01:07:40.912768 2595 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:07:40.912962 kubelet[2595]: I1213 01:07:40.912934 2595 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:07:40.913243 kubelet[2595]: I1213 01:07:40.913216 2595 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:07:40.914096 kubelet[2595]: I1213 01:07:40.914066 2595 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:07:40.915934 kubelet[2595]: E1213 01:07:40.915886 2595 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:07:40.917225 kubelet[2595]: I1213 01:07:40.917169 2595 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:07:40.921761 kubelet[2595]: I1213 01:07:40.921715 2595 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:07:40.922714 kubelet[2595]: I1213 01:07:40.922687 2595 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:07:40.924411 kubelet[2595]: I1213 01:07:40.922992 2595 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:07:40.927266 kubelet[2595]: I1213 01:07:40.927219 2595 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:07:40.927370 kubelet[2595]: I1213 01:07:40.927335 2595 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:07:40.934982 kubelet[2595]: I1213 01:07:40.934944 2595 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:07:40.952461 kubelet[2595]: I1213 01:07:40.952419 2595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:07:40.954923 kubelet[2595]: I1213 01:07:40.954887 2595 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:07:40.954993 kubelet[2595]: I1213 01:07:40.954961 2595 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:07:40.955193 kubelet[2595]: I1213 01:07:40.954991 2595 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:07:40.956856 kubelet[2595]: E1213 01:07:40.956505 2595 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:07:40.986160 kubelet[2595]: I1213 01:07:40.986130 2595 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:07:40.986342 kubelet[2595]: I1213 01:07:40.986329 2595 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:07:40.986486 kubelet[2595]: I1213 01:07:40.986472 2595 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:07:40.986733 kubelet[2595]: I1213 01:07:40.986719 2595 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:07:40.986870 kubelet[2595]: I1213 01:07:40.986857 2595 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:07:40.986931 kubelet[2595]: I1213 01:07:40.986921 2595 policy_none.go:49] "None policy: Start" Dec 13 01:07:40.987596 kubelet[2595]: I1213 01:07:40.987577 2595 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:07:40.987685 kubelet[2595]: I1213 01:07:40.987672 2595 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:07:40.987906 kubelet[2595]: I1213 01:07:40.987891 2595 state_mem.go:75] "Updated machine memory state" Dec 13 01:07:40.993527 kubelet[2595]: I1213 01:07:40.993473 2595 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:07:40.994122 kubelet[2595]: I1213 01:07:40.993852 2595 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:07:41.044907 kubelet[2595]: I1213 01:07:41.044775 2595 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:07:41.051971 kubelet[2595]: I1213 01:07:41.051937 2595 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:07:41.052115 kubelet[2595]: I1213 01:07:41.052047 2595 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:07:41.057571 kubelet[2595]: I1213 01:07:41.057535 2595 topology_manager.go:215] "Topology Admit Handler" podUID="e4d96c40a745489096e97d4a82683662" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:07:41.057747 kubelet[2595]: I1213 01:07:41.057633 2595 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:07:41.057747 kubelet[2595]: I1213 01:07:41.057684 2595 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:07:41.063491 kubelet[2595]: E1213 01:07:41.063448 2595 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:07:41.064149 kubelet[2595]: E1213 01:07:41.064086 2595 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:41.064299 kubelet[2595]: E1213 01:07:41.064175 2595 kubelet.go:1921] "Failed creating a mirror pod for" err="pods 
\"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:41.123746 kubelet[2595]: I1213 01:07:41.123673 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4d96c40a745489096e97d4a82683662-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d96c40a745489096e97d4a82683662\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:41.123746 kubelet[2595]: I1213 01:07:41.123734 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:41.123746 kubelet[2595]: I1213 01:07:41.123762 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:41.124039 kubelet[2595]: I1213 01:07:41.123819 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:41.124039 kubelet[2595]: I1213 01:07:41.123849 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:41.124039 kubelet[2595]: I1213 01:07:41.123875 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:07:41.124039 kubelet[2595]: I1213 01:07:41.123901 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4d96c40a745489096e97d4a82683662-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4d96c40a745489096e97d4a82683662\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:41.124039 kubelet[2595]: I1213 01:07:41.123924 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4d96c40a745489096e97d4a82683662-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4d96c40a745489096e97d4a82683662\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:07:41.124156 kubelet[2595]: I1213 01:07:41.123950 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:07:41.365718 kubelet[2595]: E1213 01:07:41.365665 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:41.365718 kubelet[2595]: E1213 01:07:41.365710 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:41.366454 kubelet[2595]: E1213 01:07:41.366138 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:41.910061 kubelet[2595]: I1213 01:07:41.910002 2595 apiserver.go:52] "Watching apiserver" Dec 13 01:07:41.922948 kubelet[2595]: I1213 01:07:41.922895 2595 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:07:41.976797 kubelet[2595]: E1213 01:07:41.975176 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:41.977696 kubelet[2595]: E1213 01:07:41.977663 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:41.977931 kubelet[2595]: E1213 01:07:41.977918 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:42.006471 kubelet[2595]: I1213 01:07:42.006417 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.006336004 podStartE2EDuration="4.006336004s" podCreationTimestamp="2024-12-13 01:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:07:42.004421897 +0000 UTC m=+1.163690026" watchObservedRunningTime="2024-12-13 01:07:42.006336004 +0000 UTC m=+1.165604132" Dec 13 01:07:42.012382 kubelet[2595]: I1213 01:07:42.012241 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.01219817 podStartE2EDuration="4.01219817s" podCreationTimestamp="2024-12-13 01:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:07:42.01204403 +0000 UTC m=+1.171312158" watchObservedRunningTime="2024-12-13 01:07:42.01219817 +0000 UTC m=+1.171466298" Dec 13 01:07:42.034953 kubelet[2595]: I1213 01:07:42.034516 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.034474706 podStartE2EDuration="4.034474706s" podCreationTimestamp="2024-12-13 01:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:07:42.021366145 +0000 UTC m=+1.180634273" watchObservedRunningTime="2024-12-13 
01:07:42.034474706 +0000 UTC m=+1.193742834" Dec 13 01:07:42.977012 kubelet[2595]: E1213 01:07:42.976970 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:43.979115 kubelet[2595]: E1213 01:07:43.979076 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:45.126514 sudo[1650]: pam_unix(sudo:session): session closed for user root Dec 13 01:07:45.128711 sshd[1647]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:45.133879 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:44090.service: Deactivated successfully. Dec 13 01:07:45.136498 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:07:45.136749 systemd[1]: session-9.scope: Consumed 4.832s CPU time, 188.9M memory peak, 0B memory swap peak. Dec 13 01:07:45.137575 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:07:45.138626 systemd-logind[1444]: Removed session 9. Dec 13 01:07:46.247301 kubelet[2595]: E1213 01:07:46.247257 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:46.983147 kubelet[2595]: E1213 01:07:46.983109 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:50.056125 update_engine[1448]: I20241213 01:07:50.055923 1448 update_attempter.cc:509] Updating boot flags... Dec 13 01:07:50.091453 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2690) Dec 13 01:07:50.122460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2692) Dec 13 01:07:50.161442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2692) Dec 13 01:07:50.649495 kubelet[2595]: E1213 01:07:50.649434 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:50.988276 kubelet[2595]: E1213 01:07:50.988142 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:52.101555 kubelet[2595]: E1213 01:07:52.101516 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:52.992564 kubelet[2595]: E1213 01:07:52.992520 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:53.658335 kubelet[2595]: I1213 01:07:53.658302 2595 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:07:53.658804 containerd[1461]: time="2024-12-13T01:07:53.658768004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
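Note: the kuberuntime_manager/kubelet_network entries above push the node's pod CIDR (192.168.0.0/24) into the runtime config once it is known. A small hedged sketch of checking membership in that CIDR with the standard net/netip package; the pod IP used here is hypothetical:

```go
// Illustrative only: verify that an address falls inside the pod CIDR the
// kubelet just applied (192.168.0.0/24 in the entries above).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	podCIDR := netip.MustParsePrefix("192.168.0.0/24") // value from the log
	ip := netip.MustParseAddr("192.168.0.5")           // hypothetical pod IP
	fmt.Printf("%s inside %s: %v\n", ip, podCIDR, podCIDR.Contains(ip))
}
```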
Dec 13 01:07:53.659092 kubelet[2595]: I1213 01:07:53.659027 2595 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:07:54.335824 kubelet[2595]: I1213 01:07:54.335697 2595 topology_manager.go:215] "Topology Admit Handler" podUID="b4462de4-6632-4879-98d0-3738258ae2c2" podNamespace="kube-system" podName="kube-proxy-nkgxd" Dec 13 01:07:54.343527 systemd[1]: Created slice kubepods-besteffort-podb4462de4_6632_4879_98d0_3738258ae2c2.slice - libcontainer container kubepods-besteffort-podb4462de4_6632_4879_98d0_3738258ae2c2.slice. Dec 13 01:07:54.504876 kubelet[2595]: I1213 01:07:54.504829 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4462de4-6632-4879-98d0-3738258ae2c2-kube-proxy\") pod \"kube-proxy-nkgxd\" (UID: \"b4462de4-6632-4879-98d0-3738258ae2c2\") " pod="kube-system/kube-proxy-nkgxd" Dec 13 01:07:54.504876 kubelet[2595]: I1213 01:07:54.504880 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qqd\" (UniqueName: \"kubernetes.io/projected/b4462de4-6632-4879-98d0-3738258ae2c2-kube-api-access-d7qqd\") pod \"kube-proxy-nkgxd\" (UID: \"b4462de4-6632-4879-98d0-3738258ae2c2\") " pod="kube-system/kube-proxy-nkgxd" Dec 13 01:07:54.505067 kubelet[2595]: I1213 01:07:54.504916 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4462de4-6632-4879-98d0-3738258ae2c2-xtables-lock\") pod \"kube-proxy-nkgxd\" (UID: \"b4462de4-6632-4879-98d0-3738258ae2c2\") " pod="kube-system/kube-proxy-nkgxd" Dec 13 01:07:54.505067 kubelet[2595]: I1213 01:07:54.504944 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4462de4-6632-4879-98d0-3738258ae2c2-lib-modules\") pod \"kube-proxy-nkgxd\" (UID: \"b4462de4-6632-4879-98d0-3738258ae2c2\") " pod="kube-system/kube-proxy-nkgxd" Dec 13 01:07:54.655672 kubelet[2595]: E1213 01:07:54.655603 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:54.656416 containerd[1461]: time="2024-12-13T01:07:54.656337322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkgxd,Uid:b4462de4-6632-4879-98d0-3738258ae2c2,Namespace:kube-system,Attempt:0,}" Dec 13 01:07:54.693714 containerd[1461]: time="2024-12-13T01:07:54.693367449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:07:54.693714 containerd[1461]: time="2024-12-13T01:07:54.693468971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:07:54.693714 containerd[1461]: time="2024-12-13T01:07:54.693488187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:54.694497 containerd[1461]: time="2024-12-13T01:07:54.693605086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:54.727550 systemd[1]: Started cri-containerd-43e1e02fe44b56efc25d625e3a1b8912d9738d927d42070cbb4cc06c9d25bfa8.scope - libcontainer container 43e1e02fe44b56efc25d625e3a1b8912d9738d927d42070cbb4cc06c9d25bfa8. Dec 13 01:07:54.751598 containerd[1461]: time="2024-12-13T01:07:54.751519907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkgxd,Uid:b4462de4-6632-4879-98d0-3738258ae2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"43e1e02fe44b56efc25d625e3a1b8912d9738d927d42070cbb4cc06c9d25bfa8\"" Dec 13 01:07:54.752325 kubelet[2595]: E1213 01:07:54.752302 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:54.754573 containerd[1461]: time="2024-12-13T01:07:54.754450354Z" level=info msg="CreateContainer within sandbox \"43e1e02fe44b56efc25d625e3a1b8912d9738d927d42070cbb4cc06c9d25bfa8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:07:54.771024 containerd[1461]: time="2024-12-13T01:07:54.770985952Z" level=info msg="CreateContainer within sandbox \"43e1e02fe44b56efc25d625e3a1b8912d9738d927d42070cbb4cc06c9d25bfa8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"788675f0a2e5d4568c1616afa34b0cfa90dd5c74b6ed975378b8a19cd1ca37c0\"" Dec 13 01:07:54.771603 containerd[1461]: time="2024-12-13T01:07:54.771563318Z" level=info msg="StartContainer for \"788675f0a2e5d4568c1616afa34b0cfa90dd5c74b6ed975378b8a19cd1ca37c0\"" Dec 13 01:07:54.803133 kubelet[2595]: I1213 01:07:54.802607 2595 topology_manager.go:215] "Topology Admit Handler" podUID="e68ce6b2-bdc9-46cd-9d45-514b492bf8ad" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-h4f9p" Dec 13 01:07:54.818548 systemd[1]: Started cri-containerd-788675f0a2e5d4568c1616afa34b0cfa90dd5c74b6ed975378b8a19cd1ca37c0.scope - libcontainer container 788675f0a2e5d4568c1616afa34b0cfa90dd5c74b6ed975378b8a19cd1ca37c0. Dec 13 01:07:54.823090 systemd[1]: Created slice kubepods-besteffort-pode68ce6b2_bdc9_46cd_9d45_514b492bf8ad.slice - libcontainer container kubepods-besteffort-pode68ce6b2_bdc9_46cd_9d45_514b492bf8ad.slice. 
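The "Created slice kubepods-besteffort-pod...slice" lines show the kubelet's systemd cgroup naming convention: the pod's QoS class plus its UID with dashes swapped for underscores (a dash is systemd's slice-hierarchy separator, so it cannot appear inside the UID part). A small Python sketch reproducing the slice name seen above for the tigera-operator pod; an illustration of the pattern, not the kubelet's implementation:

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # '-' separates slice levels in systemd, so the pod UID's dashes
    # become underscores in the unit name.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "e68ce6b2-bdc9-46cd-9d45-514b492bf8ad"))
# kubepods-besteffort-pode68ce6b2_bdc9_46cd_9d45_514b492bf8ad.slice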
Dec 13 01:07:54.851668 containerd[1461]: time="2024-12-13T01:07:54.851556503Z" level=info msg="StartContainer for \"788675f0a2e5d4568c1616afa34b0cfa90dd5c74b6ed975378b8a19cd1ca37c0\" returns successfully" Dec 13 01:07:54.907322 kubelet[2595]: I1213 01:07:54.907209 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e68ce6b2-bdc9-46cd-9d45-514b492bf8ad-var-lib-calico\") pod \"tigera-operator-c7ccbd65-h4f9p\" (UID: \"e68ce6b2-bdc9-46cd-9d45-514b492bf8ad\") " pod="tigera-operator/tigera-operator-c7ccbd65-h4f9p" Dec 13 01:07:54.907322 kubelet[2595]: I1213 01:07:54.907255 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6pp\" (UniqueName: \"kubernetes.io/projected/e68ce6b2-bdc9-46cd-9d45-514b492bf8ad-kube-api-access-mc6pp\") pod \"tigera-operator-c7ccbd65-h4f9p\" (UID: \"e68ce6b2-bdc9-46cd-9d45-514b492bf8ad\") " pod="tigera-operator/tigera-operator-c7ccbd65-h4f9p" Dec 13 01:07:54.995530 kubelet[2595]: E1213 01:07:54.995504 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:55.030036 kubelet[2595]: I1213 01:07:55.029977 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nkgxd" podStartSLOduration=1.029924636 podStartE2EDuration="1.029924636s" podCreationTimestamp="2024-12-13 01:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:07:55.029775025 +0000 UTC m=+14.189043153" watchObservedRunningTime="2024-12-13 01:07:55.029924636 +0000 UTC m=+14.189192764" Dec 13 01:07:55.126522 containerd[1461]: time="2024-12-13T01:07:55.126484473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-h4f9p,Uid:e68ce6b2-bdc9-46cd-9d45-514b492bf8ad,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:07:55.153248 containerd[1461]: time="2024-12-13T01:07:55.153152028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:07:55.153248 containerd[1461]: time="2024-12-13T01:07:55.153215457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:07:55.153248 containerd[1461]: time="2024-12-13T01:07:55.153226177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:55.153443 containerd[1461]: time="2024-12-13T01:07:55.153312930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:55.182526 systemd[1]: Started cri-containerd-cfbbf04b7c682b58121341a73dbf3fd33295d8b9efb0d4776afb1418efee3bf8.scope - libcontainer container cfbbf04b7c682b58121341a73dbf3fd33295d8b9efb0d4776afb1418efee3bf8. 
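In the startup-latency entry above, podStartSLOduration equals podStartE2EDuration because kube-proxy needed no image pull (both pull timestamps are the zero time). For pods that do pull, the entries later in this log are consistent with the SLO duration being the end-to-end duration minus the image-pull window. A quick arithmetic check in Python against the tigera-operator numbers reported a little further down (a consistency check on the logged values, not the kubelet's code):

# Monotonic offsets (seconds since kubelet start) copied from the log's m=+ fields.
first_started_pulling = 14.378175504
last_finished_pulling = 20.836517734
pod_start_e2e = 8.016544855              # podStartE2EDuration for tigera-operator

pull_window = last_finished_pulling - first_started_pulling   # ~6.458342230 s
slo_duration = pod_start_e2e - pull_window
print(round(slo_duration, 9))            # ~1.558202625, matching podStartSLOduration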
Dec 13 01:07:55.217890 containerd[1461]: time="2024-12-13T01:07:55.217844709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-h4f9p,Uid:e68ce6b2-bdc9-46cd-9d45-514b492bf8ad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cfbbf04b7c682b58121341a73dbf3fd33295d8b9efb0d4776afb1418efee3bf8\"" Dec 13 01:07:55.219424 containerd[1461]: time="2024-12-13T01:07:55.219386286Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:08:01.360025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262745089.mount: Deactivated successfully. Dec 13 01:08:01.672342 containerd[1461]: time="2024-12-13T01:08:01.672212467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:01.673017 containerd[1461]: time="2024-12-13T01:08:01.672966112Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764321" Dec 13 01:08:01.674168 containerd[1461]: time="2024-12-13T01:08:01.674130288Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:01.676290 containerd[1461]: time="2024-12-13T01:08:01.676259447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:01.677010 containerd[1461]: time="2024-12-13T01:08:01.676966125Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.457457878s" Dec 13 01:08:01.677010 containerd[1461]: time="2024-12-13T01:08:01.677008634Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:08:01.678561 containerd[1461]: time="2024-12-13T01:08:01.678536013Z" level=info msg="CreateContainer within sandbox \"cfbbf04b7c682b58121341a73dbf3fd33295d8b9efb0d4776afb1418efee3bf8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:08:01.688680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644843759.mount: Deactivated successfully. Dec 13 01:08:01.690181 containerd[1461]: time="2024-12-13T01:08:01.690145204Z" level=info msg="CreateContainer within sandbox \"cfbbf04b7c682b58121341a73dbf3fd33295d8b9efb0d4776afb1418efee3bf8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c07752b1ce072051e3b723a79b858e3e4cdd8c35d25d69324763db72e4f1148e\"" Dec 13 01:08:01.690555 containerd[1461]: time="2024-12-13T01:08:01.690514999Z" level=info msg="StartContainer for \"c07752b1ce072051e3b723a79b858e3e4cdd8c35d25d69324763db72e4f1148e\"" Dec 13 01:08:01.718523 systemd[1]: Started cri-containerd-c07752b1ce072051e3b723a79b858e3e4cdd8c35d25d69324763db72e4f1148e.scope - libcontainer container c07752b1ce072051e3b723a79b858e3e4cdd8c35d25d69324763db72e4f1148e. 
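The sandbox and container messages above follow the usual CRI ordering: run the pod sandbox, pull the image, create the container inside the returned sandbox ID, then start it. A schematic Python sketch of that ordering using a purely hypothetical runtime client object (the real interface is gRPC against containerd; none of these method names are taken from an actual client library):

def start_pod(runtime, pod_metadata, image, container_metadata):
    # 1. RunPodSandbox: returns a sandbox ID such as "cfbbf04b7c68..."
    sandbox_id = runtime.run_pod_sandbox(pod_metadata)
    # 2. PullImage: e.g. "quay.io/tigera/operator:v1.36.2"
    image_ref = runtime.pull_image(image)
    # 3. CreateContainer within the sandbox, then 4. StartContainer.
    container_id = runtime.create_container(sandbox_id, container_metadata, image_ref)
    runtime.start_container(container_id)
    return sandbox_id, container_id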
Dec 13 01:08:01.746478 containerd[1461]: time="2024-12-13T01:08:01.746436489Z" level=info msg="StartContainer for \"c07752b1ce072051e3b723a79b858e3e4cdd8c35d25d69324763db72e4f1148e\" returns successfully" Dec 13 01:08:02.017236 kubelet[2595]: I1213 01:08:02.016596 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-h4f9p" podStartSLOduration=1.5582026249999998 podStartE2EDuration="8.016544855s" podCreationTimestamp="2024-12-13 01:07:54 +0000 UTC" firstStartedPulling="2024-12-13 01:07:55.218907376 +0000 UTC m=+14.378175504" lastFinishedPulling="2024-12-13 01:08:01.677249606 +0000 UTC m=+20.836517734" observedRunningTime="2024-12-13 01:08:02.016429438 +0000 UTC m=+21.175697566" watchObservedRunningTime="2024-12-13 01:08:02.016544855 +0000 UTC m=+21.175812983" Dec 13 01:08:04.590090 kubelet[2595]: I1213 01:08:04.590041 2595 topology_manager.go:215] "Topology Admit Handler" podUID="238c1614-b9c8-49d1-89ea-d9a58cfbaa30" podNamespace="calico-system" podName="calico-typha-559c986bdb-hhjwz" Dec 13 01:08:04.600502 systemd[1]: Created slice kubepods-besteffort-pod238c1614_b9c8_49d1_89ea_d9a58cfbaa30.slice - libcontainer container kubepods-besteffort-pod238c1614_b9c8_49d1_89ea_d9a58cfbaa30.slice. Dec 13 01:08:04.657346 kubelet[2595]: I1213 01:08:04.657307 2595 topology_manager.go:215] "Topology Admit Handler" podUID="bb11860f-6ee6-4395-981e-9c9a23d88ee9" podNamespace="calico-system" podName="calico-node-47gch" Dec 13 01:08:04.666985 systemd[1]: Created slice kubepods-besteffort-podbb11860f_6ee6_4395_981e_9c9a23d88ee9.slice - libcontainer container kubepods-besteffort-podbb11860f_6ee6_4395_981e_9c9a23d88ee9.slice. Dec 13 01:08:04.767247 kubelet[2595]: I1213 01:08:04.766695 2595 topology_manager.go:215] "Topology Admit Handler" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" podNamespace="calico-system" podName="csi-node-driver-zplvr" Dec 13 01:08:04.767247 kubelet[2595]: E1213 01:08:04.767026 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:04.773338 kubelet[2595]: I1213 01:08:04.773273 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238c1614-b9c8-49d1-89ea-d9a58cfbaa30-tigera-ca-bundle\") pod \"calico-typha-559c986bdb-hhjwz\" (UID: \"238c1614-b9c8-49d1-89ea-d9a58cfbaa30\") " pod="calico-system/calico-typha-559c986bdb-hhjwz" Dec 13 01:08:04.773338 kubelet[2595]: I1213 01:08:04.773321 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-policysync\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773338 kubelet[2595]: I1213 01:08:04.773346 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-var-lib-calico\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773525 kubelet[2595]: I1213 01:08:04.773469 2595 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-cni-log-dir\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773525 kubelet[2595]: I1213 01:08:04.773517 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-flexvol-driver-host\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773610 kubelet[2595]: I1213 01:08:04.773553 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9zm\" (UniqueName: \"kubernetes.io/projected/bb11860f-6ee6-4395-981e-9c9a23d88ee9-kube-api-access-pt9zm\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773610 kubelet[2595]: I1213 01:08:04.773585 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb11860f-6ee6-4395-981e-9c9a23d88ee9-node-certs\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773610 kubelet[2595]: I1213 01:08:04.773603 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-var-run-calico\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773719 kubelet[2595]: I1213 01:08:04.773623 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wf7\" (UniqueName: \"kubernetes.io/projected/53ce73c4-d9e2-4a98-add0-afa55318cf9b-kube-api-access-t2wf7\") pod \"csi-node-driver-zplvr\" (UID: \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\") " pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:04.773719 kubelet[2595]: I1213 01:08:04.773643 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/238c1614-b9c8-49d1-89ea-d9a58cfbaa30-typha-certs\") pod \"calico-typha-559c986bdb-hhjwz\" (UID: \"238c1614-b9c8-49d1-89ea-d9a58cfbaa30\") " pod="calico-system/calico-typha-559c986bdb-hhjwz" Dec 13 01:08:04.773719 kubelet[2595]: I1213 01:08:04.773667 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-lib-modules\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773719 kubelet[2595]: I1213 01:08:04.773686 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/53ce73c4-d9e2-4a98-add0-afa55318cf9b-varrun\") pod \"csi-node-driver-zplvr\" (UID: \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\") " pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:04.773719 kubelet[2595]: I1213 01:08:04.773708 2595 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb11860f-6ee6-4395-981e-9c9a23d88ee9-tigera-ca-bundle\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773922 kubelet[2595]: I1213 01:08:04.773725 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ce73c4-d9e2-4a98-add0-afa55318cf9b-kubelet-dir\") pod \"csi-node-driver-zplvr\" (UID: \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\") " pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:04.773922 kubelet[2595]: I1213 01:08:04.773743 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ce73c4-d9e2-4a98-add0-afa55318cf9b-socket-dir\") pod \"csi-node-driver-zplvr\" (UID: \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\") " pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:04.773922 kubelet[2595]: I1213 01:08:04.773761 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ce73c4-d9e2-4a98-add0-afa55318cf9b-registration-dir\") pod \"csi-node-driver-zplvr\" (UID: \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\") " pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:04.773922 kubelet[2595]: I1213 01:08:04.773783 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-xtables-lock\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.773922 kubelet[2595]: I1213 01:08:04.773805 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftd7x\" (UniqueName: \"kubernetes.io/projected/238c1614-b9c8-49d1-89ea-d9a58cfbaa30-kube-api-access-ftd7x\") pod \"calico-typha-559c986bdb-hhjwz\" (UID: \"238c1614-b9c8-49d1-89ea-d9a58cfbaa30\") " pod="calico-system/calico-typha-559c986bdb-hhjwz" Dec 13 01:08:04.774084 kubelet[2595]: I1213 01:08:04.773826 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-cni-bin-dir\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.774084 kubelet[2595]: I1213 01:08:04.773845 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb11860f-6ee6-4395-981e-9c9a23d88ee9-cni-net-dir\") pod \"calico-node-47gch\" (UID: \"bb11860f-6ee6-4395-981e-9c9a23d88ee9\") " pod="calico-system/calico-node-47gch" Dec 13 01:08:04.877017 kubelet[2595]: E1213 01:08:04.876957 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.877017 kubelet[2595]: W1213 01:08:04.876991 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.877350 kubelet[2595]: E1213 01:08:04.877232 
2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.877706 kubelet[2595]: E1213 01:08:04.877579 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.877706 kubelet[2595]: W1213 01:08:04.877593 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.877706 kubelet[2595]: E1213 01:08:04.877613 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.878065 kubelet[2595]: E1213 01:08:04.877968 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.878065 kubelet[2595]: W1213 01:08:04.877982 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.878065 kubelet[2595]: E1213 01:08:04.878012 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.878526 kubelet[2595]: E1213 01:08:04.878423 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.878526 kubelet[2595]: W1213 01:08:04.878436 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.878526 kubelet[2595]: E1213 01:08:04.878458 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.881414 kubelet[2595]: E1213 01:08:04.878841 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.881414 kubelet[2595]: W1213 01:08:04.878854 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.881414 kubelet[2595]: E1213 01:08:04.881058 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.881414 kubelet[2595]: W1213 01:08:04.881076 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.881414 kubelet[2595]: E1213 01:08:04.881228 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:08:04.881414 kubelet[2595]: E1213 01:08:04.881270 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.881414 kubelet[2595]: W1213 01:08:04.881278 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.881414 kubelet[2595]: E1213 01:08:04.881296 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.881760 kubelet[2595]: E1213 01:08:04.881484 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.881857 kubelet[2595]: E1213 01:08:04.881820 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.881857 kubelet[2595]: W1213 01:08:04.881836 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.881857 kubelet[2595]: E1213 01:08:04.881848 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.882887 kubelet[2595]: E1213 01:08:04.882680 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.882887 kubelet[2595]: W1213 01:08:04.882694 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.882887 kubelet[2595]: E1213 01:08:04.882712 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.886942 kubelet[2595]: E1213 01:08:04.886885 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.886942 kubelet[2595]: W1213 01:08:04.886903 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.886942 kubelet[2595]: E1213 01:08:04.886921 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:08:04.887216 kubelet[2595]: E1213 01:08:04.887123 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.887216 kubelet[2595]: W1213 01:08:04.887131 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.887216 kubelet[2595]: E1213 01:08:04.887153 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.887408 kubelet[2595]: E1213 01:08:04.887347 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.887408 kubelet[2595]: W1213 01:08:04.887359 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.887408 kubelet[2595]: E1213 01:08:04.887377 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.887786 kubelet[2595]: E1213 01:08:04.887600 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.887786 kubelet[2595]: W1213 01:08:04.887611 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.887786 kubelet[2595]: E1213 01:08:04.887621 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:08:04.891021 kubelet[2595]: E1213 01:08:04.890998 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:08:04.891156 kubelet[2595]: W1213 01:08:04.891096 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:08:04.891156 kubelet[2595]: E1213 01:08:04.891123 2595 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:08:04.909393 kubelet[2595]: E1213 01:08:04.909343 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:04.910221 containerd[1461]: time="2024-12-13T01:08:04.910160365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559c986bdb-hhjwz,Uid:238c1614-b9c8-49d1-89ea-d9a58cfbaa30,Namespace:calico-system,Attempt:0,}" Dec 13 01:08:04.971798 kubelet[2595]: E1213 01:08:04.971757 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:04.972514 containerd[1461]: time="2024-12-13T01:08:04.972466736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47gch,Uid:bb11860f-6ee6-4395-981e-9c9a23d88ee9,Namespace:calico-system,Attempt:0,}" Dec 13 01:08:05.089342 containerd[1461]: time="2024-12-13T01:08:05.089145641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:05.089620 containerd[1461]: time="2024-12-13T01:08:05.089451145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:05.089620 containerd[1461]: time="2024-12-13T01:08:05.089515917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:05.089620 containerd[1461]: time="2024-12-13T01:08:05.089531175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:05.089735 containerd[1461]: time="2024-12-13T01:08:05.089635491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:05.090081 containerd[1461]: time="2024-12-13T01:08:05.089859933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:05.090081 containerd[1461]: time="2024-12-13T01:08:05.089878087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:05.090309 containerd[1461]: time="2024-12-13T01:08:05.090129148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:05.107613 systemd[1]: Started cri-containerd-ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd.scope - libcontainer container ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd. Dec 13 01:08:05.111292 systemd[1]: Started cri-containerd-2d6481563084fb88f1fb977e3671690b6e10232f378accae1645f06bc59d0006.scope - libcontainer container 2d6481563084fb88f1fb977e3671690b6e10232f378accae1645f06bc59d0006. 
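The burst of FlexVolume driver-call failures above comes from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver container (started further down) has installed the binary: the exec fails, output is empty, and the JSON unmarshal fails. A FlexVolume driver is expected to answer the init call with a small JSON status object on stdout; a minimal stand-in in Python showing that expected shape (a sketch of the calling convention, not Calico's actual driver):

#!/usr/bin/env python3
# Minimal FlexVolume-style driver stub; only the "init" call is handled.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # The kubelet parses this JSON; an empty output is what produced the
        # "unexpected end of JSON input" errors in the log above.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())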
Dec 13 01:08:05.134585 containerd[1461]: time="2024-12-13T01:08:05.134375114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47gch,Uid:bb11860f-6ee6-4395-981e-9c9a23d88ee9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\"" Dec 13 01:08:05.139449 kubelet[2595]: E1213 01:08:05.139412 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:05.146608 containerd[1461]: time="2024-12-13T01:08:05.146548749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:08:05.160811 containerd[1461]: time="2024-12-13T01:08:05.160759158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559c986bdb-hhjwz,Uid:238c1614-b9c8-49d1-89ea-d9a58cfbaa30,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d6481563084fb88f1fb977e3671690b6e10232f378accae1645f06bc59d0006\"" Dec 13 01:08:05.161665 kubelet[2595]: E1213 01:08:05.161626 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:05.960231 kubelet[2595]: E1213 01:08:05.960178 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:06.423752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682973444.mount: Deactivated successfully. 
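The "var-lib-containerd-tmpmounts-containerd\x2dmount682973444.mount" units that keep appearing are systemd's path escaping at work: the mount point's slashes become dashes and any literal dash is escaped as \x2d (what systemd-escape --path --suffix=mount would produce). A small Python sketch of the transformation for this particular path; it only covers the characters that occur here, not the full systemd-escape rules:

def mount_unit_name(path: str) -> str:
    # Strip the leading '/', escape literal '-' as '\x2d', turn '/' into '-'.
    # (systemd also hex-escapes other special characters; none occur here.)
    trimmed = path.strip("/")
    escaped = trimmed.replace("-", "\\x2d").replace("/", "-")
    return escaped + ".mount"

print(mount_unit_name("/var/lib/containerd/tmpmounts/containerd-mount682973444"))
# var-lib-containerd-tmpmounts-containerd\x2dmount682973444.mount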
Dec 13 01:08:06.497250 containerd[1461]: time="2024-12-13T01:08:06.497184691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:06.498047 containerd[1461]: time="2024-12-13T01:08:06.497966217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 01:08:06.499135 containerd[1461]: time="2024-12-13T01:08:06.499101458Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:06.501101 containerd[1461]: time="2024-12-13T01:08:06.501066498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:06.501629 containerd[1461]: time="2024-12-13T01:08:06.501590611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.35498222s" Dec 13 01:08:06.501661 containerd[1461]: time="2024-12-13T01:08:06.501630446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:08:06.506358 containerd[1461]: time="2024-12-13T01:08:06.506322094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:08:06.511741 containerd[1461]: time="2024-12-13T01:08:06.511684911Z" level=info msg="CreateContainer within sandbox \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:08:06.528617 containerd[1461]: time="2024-12-13T01:08:06.528564178Z" level=info msg="CreateContainer within sandbox \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd\"" Dec 13 01:08:06.529765 containerd[1461]: time="2024-12-13T01:08:06.529715570Z" level=info msg="StartContainer for \"a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd\"" Dec 13 01:08:06.563665 systemd[1]: Started cri-containerd-a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd.scope - libcontainer container a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd. Dec 13 01:08:06.606781 systemd[1]: cri-containerd-a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd.scope: Deactivated successfully. 
Dec 13 01:08:06.617374 containerd[1461]: time="2024-12-13T01:08:06.617319776Z" level=info msg="StartContainer for \"a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd\" returns successfully" Dec 13 01:08:06.659891 containerd[1461]: time="2024-12-13T01:08:06.657584111Z" level=info msg="shim disconnected" id=a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd namespace=k8s.io Dec 13 01:08:06.659891 containerd[1461]: time="2024-12-13T01:08:06.659883729Z" level=warning msg="cleaning up after shim disconnected" id=a684d0a4a55919dfb741fe76e9ec3fa11880a11ac441682086a21a2b6324b0fd namespace=k8s.io Dec 13 01:08:06.659891 containerd[1461]: time="2024-12-13T01:08:06.659897795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:08:07.021732 kubelet[2595]: E1213 01:08:07.021696 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:07.956584 kubelet[2595]: E1213 01:08:07.956529 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:09.062437 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:38868.service - OpenSSH per-connection server daemon (10.0.0.1:38868). Dec 13 01:08:09.101381 sshd[3173]: Accepted publickey for core from 10.0.0.1 port 38868 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:09.103349 sshd[3173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:09.111564 systemd-logind[1444]: New session 10 of user core. Dec 13 01:08:09.117833 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:08:09.242780 sshd[3173]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:09.246536 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:38868.service: Deactivated successfully. Dec 13 01:08:09.248837 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:08:09.249412 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:08:09.250205 systemd-logind[1444]: Removed session 10. 
Dec 13 01:08:09.955906 kubelet[2595]: E1213 01:08:09.955856 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:10.604524 containerd[1461]: time="2024-12-13T01:08:10.604432023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:10.605415 containerd[1461]: time="2024-12-13T01:08:10.605352079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 01:08:10.606778 containerd[1461]: time="2024-12-13T01:08:10.606745685Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:10.609205 containerd[1461]: time="2024-12-13T01:08:10.609133216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:10.609676 containerd[1461]: time="2024-12-13T01:08:10.609637172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.103278961s" Dec 13 01:08:10.609676 containerd[1461]: time="2024-12-13T01:08:10.609668711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:08:10.610897 containerd[1461]: time="2024-12-13T01:08:10.610525880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:08:10.624702 containerd[1461]: time="2024-12-13T01:08:10.624645220Z" level=info msg="CreateContainer within sandbox \"2d6481563084fb88f1fb977e3671690b6e10232f378accae1645f06bc59d0006\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:08:10.936712 containerd[1461]: time="2024-12-13T01:08:10.936556443Z" level=info msg="CreateContainer within sandbox \"2d6481563084fb88f1fb977e3671690b6e10232f378accae1645f06bc59d0006\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"310f33608c53cdc7ff68a16ab14a213cb27f63f6335bf10815182bf45ed7ea7a\"" Dec 13 01:08:10.937384 containerd[1461]: time="2024-12-13T01:08:10.937036584Z" level=info msg="StartContainer for \"310f33608c53cdc7ff68a16ab14a213cb27f63f6335bf10815182bf45ed7ea7a\"" Dec 13 01:08:10.962560 systemd[1]: Started cri-containerd-310f33608c53cdc7ff68a16ab14a213cb27f63f6335bf10815182bf45ed7ea7a.scope - libcontainer container 310f33608c53cdc7ff68a16ab14a213cb27f63f6335bf10815182bf45ed7ea7a. 
Dec 13 01:08:11.034574 containerd[1461]: time="2024-12-13T01:08:11.034517115Z" level=info msg="StartContainer for \"310f33608c53cdc7ff68a16ab14a213cb27f63f6335bf10815182bf45ed7ea7a\" returns successfully" Dec 13 01:08:11.037564 kubelet[2595]: E1213 01:08:11.037543 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:11.955548 kubelet[2595]: E1213 01:08:11.955499 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:12.038598 kubelet[2595]: I1213 01:08:12.038570 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:08:12.039189 kubelet[2595]: E1213 01:08:12.039174 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:13.955713 kubelet[2595]: E1213 01:08:13.955656 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:14.254860 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:38882.service - OpenSSH per-connection server daemon (10.0.0.1:38882). Dec 13 01:08:14.291899 sshd[3232]: Accepted publickey for core from 10.0.0.1 port 38882 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:14.293530 sshd[3232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:14.297799 systemd-logind[1444]: New session 11 of user core. Dec 13 01:08:14.315546 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:08:14.424491 sshd[3232]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:14.428803 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:38882.service: Deactivated successfully. Dec 13 01:08:14.430869 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:08:14.431596 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:08:14.432480 systemd-logind[1444]: Removed session 11. 
Dec 13 01:08:15.956050 kubelet[2595]: E1213 01:08:15.955988 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:17.167972 containerd[1461]: time="2024-12-13T01:08:17.167918576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:17.175092 containerd[1461]: time="2024-12-13T01:08:17.175012046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:08:17.189575 containerd[1461]: time="2024-12-13T01:08:17.189530207Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:17.213689 containerd[1461]: time="2024-12-13T01:08:17.213617270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:17.215499 containerd[1461]: time="2024-12-13T01:08:17.215442786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.604856934s" Dec 13 01:08:17.215559 containerd[1461]: time="2024-12-13T01:08:17.215498040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:08:17.218701 containerd[1461]: time="2024-12-13T01:08:17.218571036Z" level=info msg="CreateContainer within sandbox \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:08:17.578375 containerd[1461]: time="2024-12-13T01:08:17.578241102Z" level=info msg="CreateContainer within sandbox \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820\"" Dec 13 01:08:17.578859 containerd[1461]: time="2024-12-13T01:08:17.578758482Z" level=info msg="StartContainer for \"f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820\"" Dec 13 01:08:17.614592 systemd[1]: Started cri-containerd-f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820.scope - libcontainer container f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820. 
Dec 13 01:08:17.855031 containerd[1461]: time="2024-12-13T01:08:17.854933728Z" level=info msg="StartContainer for \"f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820\" returns successfully" Dec 13 01:08:17.956530 kubelet[2595]: E1213 01:08:17.956485 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:18.049275 kubelet[2595]: E1213 01:08:18.049221 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:18.063862 kubelet[2595]: I1213 01:08:18.063817 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-559c986bdb-hhjwz" podStartSLOduration=8.616151803 podStartE2EDuration="14.063767788s" podCreationTimestamp="2024-12-13 01:08:04 +0000 UTC" firstStartedPulling="2024-12-13 01:08:05.162457536 +0000 UTC m=+24.321725664" lastFinishedPulling="2024-12-13 01:08:10.610073521 +0000 UTC m=+29.769341649" observedRunningTime="2024-12-13 01:08:11.107624318 +0000 UTC m=+30.266892436" watchObservedRunningTime="2024-12-13 01:08:18.063767788 +0000 UTC m=+37.223035916" Dec 13 01:08:19.051152 kubelet[2595]: E1213 01:08:19.051094 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:19.161592 containerd[1461]: time="2024-12-13T01:08:19.161459456Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:08:19.164024 kubelet[2595]: I1213 01:08:19.163997 2595 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:08:19.165410 systemd[1]: cri-containerd-f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820.scope: Deactivated successfully. 
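The "failed to reload cni configuration" error above fires because the file that changed, /etc/cni/net.d/calico-kubeconfig, is a kubeconfig written by the install-cni container, not a network configuration; containerd still finds no loadable CNI config in /etc/cni/net.d, so the plugin stays uninitialized until Calico writes its conflist. A rough Python sketch of that kind of directory check, mirroring the error text in the log (an illustration, not containerd's actual loader, which may also consider other file extensions):

import glob
import os

def find_cni_configs(conf_dir: str = "/etc/cni/net.d"):
    # Only *.conf / *.conflist files are treated as network configs here;
    # a file like "calico-kubeconfig" does not count.
    patterns = ("*.conf", "*.conflist")
    found = [p for pat in patterns for p in glob.glob(os.path.join(conf_dir, pat))]
    if not found:
        raise RuntimeError(
            f"no network config found in {conf_dir}: cni plugin not initialized")
    return found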
Dec 13 01:08:19.188451 kubelet[2595]: I1213 01:08:19.188012 2595 topology_manager.go:215] "Topology Admit Handler" podUID="6cab1421-2490-4e1e-a106-3059cdb91580" podNamespace="kube-system" podName="coredns-76f75df574-57lkc" Dec 13 01:08:19.189790 kubelet[2595]: I1213 01:08:19.189757 2595 topology_manager.go:215] "Topology Admit Handler" podUID="c89f9093-1b31-49a1-b329-531dacccd48c" podNamespace="kube-system" podName="coredns-76f75df574-gcxjc" Dec 13 01:08:19.191002 kubelet[2595]: I1213 01:08:19.190961 2595 topology_manager.go:215] "Topology Admit Handler" podUID="56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2" podNamespace="calico-apiserver" podName="calico-apiserver-79f8c64c55-j6szh" Dec 13 01:08:19.191181 kubelet[2595]: I1213 01:08:19.191070 2595 topology_manager.go:215] "Topology Admit Handler" podUID="048e2ce0-7d8d-4a74-8789-6bbabf5e378c" podNamespace="calico-apiserver" podName="calico-apiserver-79f8c64c55-jqxwq" Dec 13 01:08:19.194204 kubelet[2595]: I1213 01:08:19.193971 2595 topology_manager.go:215] "Topology Admit Handler" podUID="8ea07a70-019b-41be-b5b8-8680d6837b86" podNamespace="calico-system" podName="calico-kube-controllers-dd4b76b86-ckqv7" Dec 13 01:08:19.202210 systemd[1]: Created slice kubepods-burstable-pod6cab1421_2490_4e1e_a106_3059cdb91580.slice - libcontainer container kubepods-burstable-pod6cab1421_2490_4e1e_a106_3059cdb91580.slice. Dec 13 01:08:19.204821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820-rootfs.mount: Deactivated successfully. Dec 13 01:08:19.217813 systemd[1]: Created slice kubepods-burstable-podc89f9093_1b31_49a1_b329_531dacccd48c.slice - libcontainer container kubepods-burstable-podc89f9093_1b31_49a1_b329_531dacccd48c.slice. Dec 13 01:08:19.225456 systemd[1]: Created slice kubepods-besteffort-pod048e2ce0_7d8d_4a74_8789_6bbabf5e378c.slice - libcontainer container kubepods-besteffort-pod048e2ce0_7d8d_4a74_8789_6bbabf5e378c.slice. Dec 13 01:08:19.231985 systemd[1]: Created slice kubepods-besteffort-pod56f972bd_9655_41a5_b4f6_4b6ac7ebcdc2.slice - libcontainer container kubepods-besteffort-pod56f972bd_9655_41a5_b4f6_4b6ac7ebcdc2.slice. Dec 13 01:08:19.237349 systemd[1]: Created slice kubepods-besteffort-pod8ea07a70_019b_41be_b5b8_8680d6837b86.slice - libcontainer container kubepods-besteffort-pod8ea07a70_019b_41be_b5b8_8680d6837b86.slice. 
Dec 13 01:08:19.371945 kubelet[2595]: I1213 01:08:19.371893 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2-calico-apiserver-certs\") pod \"calico-apiserver-79f8c64c55-j6szh\" (UID: \"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2\") " pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" Dec 13 01:08:19.371945 kubelet[2595]: I1213 01:08:19.371947 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/048e2ce0-7d8d-4a74-8789-6bbabf5e378c-calico-apiserver-certs\") pod \"calico-apiserver-79f8c64c55-jqxwq\" (UID: \"048e2ce0-7d8d-4a74-8789-6bbabf5e378c\") " pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" Dec 13 01:08:19.372366 kubelet[2595]: I1213 01:08:19.371979 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq5gc\" (UniqueName: \"kubernetes.io/projected/8ea07a70-019b-41be-b5b8-8680d6837b86-kube-api-access-kq5gc\") pod \"calico-kube-controllers-dd4b76b86-ckqv7\" (UID: \"8ea07a70-019b-41be-b5b8-8680d6837b86\") " pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" Dec 13 01:08:19.372366 kubelet[2595]: I1213 01:08:19.372006 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrkw\" (UniqueName: \"kubernetes.io/projected/c89f9093-1b31-49a1-b329-531dacccd48c-kube-api-access-vsrkw\") pod \"coredns-76f75df574-gcxjc\" (UID: \"c89f9093-1b31-49a1-b329-531dacccd48c\") " pod="kube-system/coredns-76f75df574-gcxjc" Dec 13 01:08:19.372366 kubelet[2595]: I1213 01:08:19.372032 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c89f9093-1b31-49a1-b329-531dacccd48c-config-volume\") pod \"coredns-76f75df574-gcxjc\" (UID: \"c89f9093-1b31-49a1-b329-531dacccd48c\") " pod="kube-system/coredns-76f75df574-gcxjc" Dec 13 01:08:19.372366 kubelet[2595]: I1213 01:08:19.372056 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzfxm\" (UniqueName: \"kubernetes.io/projected/048e2ce0-7d8d-4a74-8789-6bbabf5e378c-kube-api-access-hzfxm\") pod \"calico-apiserver-79f8c64c55-jqxwq\" (UID: \"048e2ce0-7d8d-4a74-8789-6bbabf5e378c\") " pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" Dec 13 01:08:19.372366 kubelet[2595]: I1213 01:08:19.372078 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829vr\" (UniqueName: \"kubernetes.io/projected/6cab1421-2490-4e1e-a106-3059cdb91580-kube-api-access-829vr\") pod \"coredns-76f75df574-57lkc\" (UID: \"6cab1421-2490-4e1e-a106-3059cdb91580\") " pod="kube-system/coredns-76f75df574-57lkc" Dec 13 01:08:19.372641 kubelet[2595]: I1213 01:08:19.372097 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea07a70-019b-41be-b5b8-8680d6837b86-tigera-ca-bundle\") pod \"calico-kube-controllers-dd4b76b86-ckqv7\" (UID: \"8ea07a70-019b-41be-b5b8-8680d6837b86\") " pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" Dec 13 01:08:19.372641 kubelet[2595]: I1213 01:08:19.372116 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cab1421-2490-4e1e-a106-3059cdb91580-config-volume\") pod \"coredns-76f75df574-57lkc\" (UID: \"6cab1421-2490-4e1e-a106-3059cdb91580\") " pod="kube-system/coredns-76f75df574-57lkc" Dec 13 01:08:19.372641 kubelet[2595]: I1213 01:08:19.372135 2595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgb9l\" (UniqueName: \"kubernetes.io/projected/56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2-kube-api-access-kgb9l\") pod \"calico-apiserver-79f8c64c55-j6szh\" (UID: \"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2\") " pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" Dec 13 01:08:19.374792 containerd[1461]: time="2024-12-13T01:08:19.374646853Z" level=info msg="shim disconnected" id=f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820 namespace=k8s.io Dec 13 01:08:19.374792 containerd[1461]: time="2024-12-13T01:08:19.374720140Z" level=warning msg="cleaning up after shim disconnected" id=f96b806b9e1787be50d804e5ad86796392fe6d80052c6c2e0f8418b8cb76d820 namespace=k8s.io Dec 13 01:08:19.374792 containerd[1461]: time="2024-12-13T01:08:19.374741600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:08:19.435528 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:33966.service - OpenSSH per-connection server daemon (10.0.0.1:33966). Dec 13 01:08:19.476685 sshd[3314]: Accepted publickey for core from 10.0.0.1 port 33966 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:19.478953 sshd[3314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:19.489810 systemd-logind[1444]: New session 12 of user core. Dec 13 01:08:19.494622 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:08:19.520728 kubelet[2595]: E1213 01:08:19.520698 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:19.521652 containerd[1461]: time="2024-12-13T01:08:19.521253133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57lkc,Uid:6cab1421-2490-4e1e-a106-3059cdb91580,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:19.521712 kubelet[2595]: E1213 01:08:19.521549 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:19.522343 containerd[1461]: time="2024-12-13T01:08:19.522189440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gcxjc,Uid:c89f9093-1b31-49a1-b329-531dacccd48c,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:19.528788 containerd[1461]: time="2024-12-13T01:08:19.528755338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-jqxwq,Uid:048e2ce0-7d8d-4a74-8789-6bbabf5e378c,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:08:19.535420 containerd[1461]: time="2024-12-13T01:08:19.535363136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-j6szh,Uid:56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:08:19.539850 containerd[1461]: time="2024-12-13T01:08:19.539818584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd4b76b86-ckqv7,Uid:8ea07a70-019b-41be-b5b8-8680d6837b86,Namespace:calico-system,Attempt:0,}" Dec 13 01:08:19.644681 sshd[3314]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:19.656112 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:33966.service: Deactivated successfully. Dec 13 01:08:19.657794 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:08:19.659512 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:08:19.669632 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:33972.service - OpenSSH per-connection server daemon (10.0.0.1:33972). Dec 13 01:08:19.670502 systemd-logind[1444]: Removed session 12. Dec 13 01:08:19.699760 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 33972 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:19.701345 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:19.705499 systemd-logind[1444]: New session 13 of user core. Dec 13 01:08:19.715535 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:08:19.871292 sshd[3339]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:19.880274 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:33972.service: Deactivated successfully. Dec 13 01:08:19.882293 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:08:19.886797 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:08:19.895732 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:33974.service - OpenSSH per-connection server daemon (10.0.0.1:33974). Dec 13 01:08:19.903165 systemd-logind[1444]: Removed session 13. 
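The repeated dns.go:153 "Nameserver limits exceeded" warnings above indicate that the node's resolver configuration lists more than three nameservers; kubelet applies only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and omits the rest. A minimal sketch of that truncation, assuming the resolver file format is the usual resolv.conf style (the log does not show which file kubelet actually read, and the fourth nameserver below is purely a hypothetical extra entry):

```python
# Minimal sketch (assumptions: resolv.conf-style input; the 9.9.9.9 entry is a
# made-up fourth nameserver used only to trigger the truncation).
MAX_NAMESERVERS = 3  # kubelet keeps at most three nameservers per pod

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.strip().startswith("nameserver")]
    return servers[:MAX_NAMESERVERS]

example = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
# Reproduces the "applied nameserver line" reported in the warning above.
print(" ".join(applied_nameservers(example)))  # -> 1.1.1.1 1.0.0.1 8.8.8.8
```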
Dec 13 01:08:19.940499 sshd[3351]: Accepted publickey for core from 10.0.0.1 port 33974 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:19.943912 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:19.952488 systemd-logind[1444]: New session 14 of user core. Dec 13 01:08:19.955546 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:08:19.965866 systemd[1]: Created slice kubepods-besteffort-pod53ce73c4_d9e2_4a98_add0_afa55318cf9b.slice - libcontainer container kubepods-besteffort-pod53ce73c4_d9e2_4a98_add0_afa55318cf9b.slice. Dec 13 01:08:19.969624 containerd[1461]: time="2024-12-13T01:08:19.969576427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zplvr,Uid:53ce73c4-d9e2-4a98-add0-afa55318cf9b,Namespace:calico-system,Attempt:0,}" Dec 13 01:08:20.020741 containerd[1461]: time="2024-12-13T01:08:20.020623249Z" level=error msg="Failed to destroy network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.021372 containerd[1461]: time="2024-12-13T01:08:20.021314005Z" level=error msg="encountered an error cleaning up failed sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.021491 containerd[1461]: time="2024-12-13T01:08:20.021470489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-jqxwq,Uid:048e2ce0-7d8d-4a74-8789-6bbabf5e378c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.022225 kubelet[2595]: E1213 01:08:20.021844 2595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.022225 kubelet[2595]: E1213 01:08:20.021909 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" Dec 13 01:08:20.022225 kubelet[2595]: E1213 01:08:20.021930 2595 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" Dec 13 01:08:20.022377 kubelet[2595]: E1213 01:08:20.021984 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f8c64c55-jqxwq_calico-apiserver(048e2ce0-7d8d-4a74-8789-6bbabf5e378c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f8c64c55-jqxwq_calico-apiserver(048e2ce0-7d8d-4a74-8789-6bbabf5e378c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" podUID="048e2ce0-7d8d-4a74-8789-6bbabf5e378c" Dec 13 01:08:20.028346 containerd[1461]: time="2024-12-13T01:08:20.028285094Z" level=error msg="Failed to destroy network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.033062 containerd[1461]: time="2024-12-13T01:08:20.032993007Z" level=error msg="encountered an error cleaning up failed sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.033146 containerd[1461]: time="2024-12-13T01:08:20.033104015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57lkc,Uid:6cab1421-2490-4e1e-a106-3059cdb91580,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.033576 kubelet[2595]: E1213 01:08:20.033348 2595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.033576 kubelet[2595]: E1213 01:08:20.033466 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57lkc" Dec 13 01:08:20.033576 kubelet[2595]: E1213 01:08:20.033505 2595 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-57lkc" Dec 13 01:08:20.033874 kubelet[2595]: E1213 01:08:20.033860 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-57lkc_kube-system(6cab1421-2490-4e1e-a106-3059cdb91580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-57lkc_kube-system(6cab1421-2490-4e1e-a106-3059cdb91580)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57lkc" podUID="6cab1421-2490-4e1e-a106-3059cdb91580" Dec 13 01:08:20.036595 containerd[1461]: time="2024-12-13T01:08:20.036542316Z" level=error msg="Failed to destroy network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.037013 containerd[1461]: time="2024-12-13T01:08:20.036981079Z" level=error msg="encountered an error cleaning up failed sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.037052 containerd[1461]: time="2024-12-13T01:08:20.037037495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gcxjc,Uid:c89f9093-1b31-49a1-b329-531dacccd48c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.037267 kubelet[2595]: E1213 01:08:20.037250 2595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.037679 kubelet[2595]: E1213 01:08:20.037465 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gcxjc" Dec 13 01:08:20.037679 kubelet[2595]: E1213 01:08:20.037587 2595 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gcxjc" Dec 13 01:08:20.038455 kubelet[2595]: E1213 01:08:20.037932 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gcxjc_kube-system(c89f9093-1b31-49a1-b329-531dacccd48c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gcxjc_kube-system(c89f9093-1b31-49a1-b329-531dacccd48c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gcxjc" podUID="c89f9093-1b31-49a1-b329-531dacccd48c" Dec 13 01:08:20.041999 containerd[1461]: time="2024-12-13T01:08:20.041849613Z" level=error msg="Failed to destroy network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.042462 containerd[1461]: time="2024-12-13T01:08:20.042420604Z" level=error msg="encountered an error cleaning up failed sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.042672 containerd[1461]: time="2024-12-13T01:08:20.042639214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-j6szh,Uid:56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.043447 kubelet[2595]: E1213 01:08:20.042971 2595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.043447 kubelet[2595]: E1213 01:08:20.043029 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" Dec 13 01:08:20.043447 kubelet[2595]: E1213 01:08:20.043053 
2595 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" Dec 13 01:08:20.043620 kubelet[2595]: E1213 01:08:20.043114 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f8c64c55-j6szh_calico-apiserver(56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f8c64c55-j6szh_calico-apiserver(56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" podUID="56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2" Dec 13 01:08:20.054357 kubelet[2595]: I1213 01:08:20.054316 2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:20.055584 containerd[1461]: time="2024-12-13T01:08:20.055555537Z" level=info msg="StopPodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\"" Dec 13 01:08:20.055842 containerd[1461]: time="2024-12-13T01:08:20.055805165Z" level=info msg="Ensure that sandbox 13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af in task-service has been cleanup successfully" Dec 13 01:08:20.057737 kubelet[2595]: I1213 01:08:20.057706 2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:20.058218 containerd[1461]: time="2024-12-13T01:08:20.058195640Z" level=info msg="StopPodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\"" Dec 13 01:08:20.058380 containerd[1461]: time="2024-12-13T01:08:20.058361290Z" level=info msg="Ensure that sandbox 9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76 in task-service has been cleanup successfully" Dec 13 01:08:20.059835 kubelet[2595]: I1213 01:08:20.059817 2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:20.062609 containerd[1461]: time="2024-12-13T01:08:20.062549519Z" level=info msg="StopPodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\"" Dec 13 01:08:20.063155 containerd[1461]: time="2024-12-13T01:08:20.063070656Z" level=info msg="Ensure that sandbox 93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e in task-service has been cleanup successfully" Dec 13 01:08:20.068601 kubelet[2595]: E1213 01:08:20.068569 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:20.076873 containerd[1461]: time="2024-12-13T01:08:20.076830251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 
01:08:20.077204 kubelet[2595]: I1213 01:08:20.077185 2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:20.078617 containerd[1461]: time="2024-12-13T01:08:20.078463315Z" level=error msg="Failed to destroy network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.079353 containerd[1461]: time="2024-12-13T01:08:20.079295907Z" level=info msg="StopPodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\"" Dec 13 01:08:20.079807 containerd[1461]: time="2024-12-13T01:08:20.079722928Z" level=info msg="Ensure that sandbox d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9 in task-service has been cleanup successfully" Dec 13 01:08:20.085876 containerd[1461]: time="2024-12-13T01:08:20.085776726Z" level=error msg="encountered an error cleaning up failed sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.086923 containerd[1461]: time="2024-12-13T01:08:20.086641949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd4b76b86-ckqv7,Uid:8ea07a70-019b-41be-b5b8-8680d6837b86,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.089649 kubelet[2595]: E1213 01:08:20.088861 2595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.089649 kubelet[2595]: E1213 01:08:20.088931 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" Dec 13 01:08:20.089649 kubelet[2595]: E1213 01:08:20.088958 2595 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" Dec 13 01:08:20.089876 kubelet[2595]: E1213 01:08:20.089023 2595 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dd4b76b86-ckqv7_calico-system(8ea07a70-019b-41be-b5b8-8680d6837b86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dd4b76b86-ckqv7_calico-system(8ea07a70-019b-41be-b5b8-8680d6837b86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" podUID="8ea07a70-019b-41be-b5b8-8680d6837b86" Dec 13 01:08:20.100780 sshd[3351]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:20.107298 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:33974.service: Deactivated successfully. Dec 13 01:08:20.111822 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:08:20.113329 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:08:20.115055 systemd-logind[1444]: Removed session 14. Dec 13 01:08:20.123865 containerd[1461]: time="2024-12-13T01:08:20.123808568Z" level=error msg="StopPodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" failed" error="failed to destroy network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.124144 kubelet[2595]: E1213 01:08:20.124115 2595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:20.124233 kubelet[2595]: E1213 01:08:20.124207 2595 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e"} Dec 13 01:08:20.124280 kubelet[2595]: E1213 01:08:20.124255 2595 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cab1421-2490-4e1e-a106-3059cdb91580\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:08:20.124488 kubelet[2595]: E1213 01:08:20.124286 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cab1421-2490-4e1e-a106-3059cdb91580\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-57lkc" podUID="6cab1421-2490-4e1e-a106-3059cdb91580" Dec 13 01:08:20.127887 containerd[1461]: time="2024-12-13T01:08:20.127798183Z" level=error msg="StopPodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" failed" error="failed to destroy network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.128220 kubelet[2595]: E1213 01:08:20.128200 2595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:20.128502 kubelet[2595]: E1213 01:08:20.128353 2595 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af"} Dec 13 01:08:20.128502 kubelet[2595]: E1213 01:08:20.128388 2595 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"048e2ce0-7d8d-4a74-8789-6bbabf5e378c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:08:20.128502 kubelet[2595]: E1213 01:08:20.128481 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"048e2ce0-7d8d-4a74-8789-6bbabf5e378c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" podUID="048e2ce0-7d8d-4a74-8789-6bbabf5e378c" Dec 13 01:08:20.138538 containerd[1461]: time="2024-12-13T01:08:20.138477489Z" level=error msg="StopPodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" failed" error="failed to destroy network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.138856 kubelet[2595]: E1213 01:08:20.138802 2595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:20.138974 kubelet[2595]: E1213 01:08:20.138870 2595 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9"} Dec 13 01:08:20.138974 kubelet[2595]: E1213 01:08:20.138918 2595 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:08:20.138974 kubelet[2595]: E1213 01:08:20.138958 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" podUID="56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2" Dec 13 01:08:20.139285 containerd[1461]: time="2024-12-13T01:08:20.139055814Z" level=error msg="Failed to destroy network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.139888 containerd[1461]: time="2024-12-13T01:08:20.139840486Z" level=error msg="encountered an error cleaning up failed sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.140008 containerd[1461]: time="2024-12-13T01:08:20.139978875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zplvr,Uid:53ce73c4-d9e2-4a98-add0-afa55318cf9b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.140309 kubelet[2595]: E1213 01:08:20.140281 2595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.140379 kubelet[2595]: E1213 01:08:20.140365 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:20.140450 kubelet[2595]: E1213 01:08:20.140407 2595 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zplvr" Dec 13 01:08:20.140504 kubelet[2595]: E1213 01:08:20.140493 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zplvr_calico-system(53ce73c4-d9e2-4a98-add0-afa55318cf9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zplvr_calico-system(53ce73c4-d9e2-4a98-add0-afa55318cf9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:20.146547 containerd[1461]: time="2024-12-13T01:08:20.146415421Z" level=error msg="StopPodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" failed" error="failed to destroy network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:20.146788 kubelet[2595]: E1213 01:08:20.146746 2595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:20.147028 kubelet[2595]: E1213 01:08:20.146999 2595 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76"} Dec 13 01:08:20.147135 kubelet[2595]: E1213 01:08:20.147050 2595 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c89f9093-1b31-49a1-b329-531dacccd48c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:08:20.147135 kubelet[2595]: E1213 01:08:20.147090 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"c89f9093-1b31-49a1-b329-531dacccd48c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gcxjc" podUID="c89f9093-1b31-49a1-b329-531dacccd48c" Dec 13 01:08:20.207223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76-shm.mount: Deactivated successfully. Dec 13 01:08:21.080045 kubelet[2595]: I1213 01:08:21.079993 2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:21.080642 containerd[1461]: time="2024-12-13T01:08:21.080603142Z" level=info msg="StopPodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\"" Dec 13 01:08:21.080908 kubelet[2595]: I1213 01:08:21.080726 2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:21.081114 containerd[1461]: time="2024-12-13T01:08:21.081085778Z" level=info msg="Ensure that sandbox 1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2 in task-service has been cleanup successfully" Dec 13 01:08:21.082114 containerd[1461]: time="2024-12-13T01:08:21.082071577Z" level=info msg="StopPodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\"" Dec 13 01:08:21.082338 containerd[1461]: time="2024-12-13T01:08:21.082313081Z" level=info msg="Ensure that sandbox 0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb in task-service has been cleanup successfully" Dec 13 01:08:21.111744 containerd[1461]: time="2024-12-13T01:08:21.111685355Z" level=error msg="StopPodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" failed" error="failed to destroy network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:21.112003 kubelet[2595]: E1213 01:08:21.111972 2595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:21.112072 kubelet[2595]: E1213 01:08:21.112022 2595 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2"} Dec 13 01:08:21.112072 kubelet[2595]: E1213 01:08:21.112065 2595 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ea07a70-019b-41be-b5b8-8680d6837b86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:08:21.112158 kubelet[2595]: E1213 01:08:21.112095 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ea07a70-019b-41be-b5b8-8680d6837b86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" podUID="8ea07a70-019b-41be-b5b8-8680d6837b86" Dec 13 01:08:21.119261 containerd[1461]: time="2024-12-13T01:08:21.119200754Z" level=error msg="StopPodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" failed" error="failed to destroy network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:08:21.119548 kubelet[2595]: E1213 01:08:21.119528 2595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:21.119591 kubelet[2595]: E1213 01:08:21.119579 2595 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb"} Dec 13 01:08:21.119631 kubelet[2595]: E1213 01:08:21.119620 2595 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:08:21.119675 kubelet[2595]: E1213 01:08:21.119664 2595 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53ce73c4-d9e2-4a98-add0-afa55318cf9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zplvr" podUID="53ce73c4-d9e2-4a98-add0-afa55318cf9b" Dec 13 01:08:24.067237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540638606.mount: Deactivated successfully. 
Dec 13 01:08:24.610099 containerd[1461]: time="2024-12-13T01:08:24.610046989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.613563 containerd[1461]: time="2024-12-13T01:08:24.613488976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:08:24.633811 containerd[1461]: time="2024-12-13T01:08:24.633767858Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.636289 containerd[1461]: time="2024-12-13T01:08:24.636245146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.636945 containerd[1461]: time="2024-12-13T01:08:24.636904542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.559435453s" Dec 13 01:08:24.636945 containerd[1461]: time="2024-12-13T01:08:24.636936793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:08:24.644271 containerd[1461]: time="2024-12-13T01:08:24.644219745Z" level=info msg="CreateContainer within sandbox \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:08:24.675987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084028683.mount: Deactivated successfully. Dec 13 01:08:24.678822 containerd[1461]: time="2024-12-13T01:08:24.678783996Z" level=info msg="CreateContainer within sandbox \"ff8ba565ea3dd3f27df8e152e24a8bc62bddabc88c3b01135bcec3296528d2cd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1e21bd98b0cde4c9af7b1323845539b8a7fc11099e5520162b5e88a890962d74\"" Dec 13 01:08:24.679587 containerd[1461]: time="2024-12-13T01:08:24.679541787Z" level=info msg="StartContainer for \"1e21bd98b0cde4c9af7b1323845539b8a7fc11099e5520162b5e88a890962d74\"" Dec 13 01:08:24.740687 systemd[1]: Started cri-containerd-1e21bd98b0cde4c9af7b1323845539b8a7fc11099e5520162b5e88a890962d74.scope - libcontainer container 1e21bd98b0cde4c9af7b1323845539b8a7fc11099e5520162b5e88a890962d74. Dec 13 01:08:24.774621 containerd[1461]: time="2024-12-13T01:08:24.774578582Z" level=info msg="StartContainer for \"1e21bd98b0cde4c9af7b1323845539b8a7fc11099e5520162b5e88a890962d74\" returns successfully" Dec 13 01:08:24.839468 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:08:24.839686 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:08:25.098580 kubelet[2595]: E1213 01:08:25.098532 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:25.112934 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:33990.service - OpenSSH per-connection server daemon (10.0.0.1:33990). Dec 13 01:08:25.132125 kubelet[2595]: I1213 01:08:25.132080 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-47gch" podStartSLOduration=1.6403012380000002 podStartE2EDuration="21.132036771s" podCreationTimestamp="2024-12-13 01:08:04 +0000 UTC" firstStartedPulling="2024-12-13 01:08:05.145445979 +0000 UTC m=+24.304714107" lastFinishedPulling="2024-12-13 01:08:24.637181502 +0000 UTC m=+43.796449640" observedRunningTime="2024-12-13 01:08:25.131628466 +0000 UTC m=+44.290896604" watchObservedRunningTime="2024-12-13 01:08:25.132036771 +0000 UTC m=+44.291304899" Dec 13 01:08:25.161706 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 33990 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:25.163301 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:25.169503 systemd-logind[1444]: New session 15 of user core. Dec 13 01:08:25.175541 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:08:25.297099 sshd[3781]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:25.301022 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:33990.service: Deactivated successfully. Dec 13 01:08:25.302832 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:08:25.303632 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:08:25.304436 systemd-logind[1444]: Removed session 15. Dec 13 01:08:26.100169 kubelet[2595]: E1213 01:08:26.100130 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:30.308322 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:38206.service - OpenSSH per-connection server daemon (10.0.0.1:38206). Dec 13 01:08:30.347023 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 38206 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:30.348771 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:30.353326 systemd-logind[1444]: New session 16 of user core. Dec 13 01:08:30.362518 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:08:30.495420 sshd[4024]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:30.501117 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:38206.service: Deactivated successfully. Dec 13 01:08:30.504281 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:08:30.505333 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:08:30.506624 systemd-logind[1444]: Removed session 16. 
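The pod_startup_latency_tracker line above for calico-node-47gch reports two durations that differ by exactly the image-pull window: podStartE2EDuration (~21.13 s, creation to observed running) minus the interval from firstStartedPulling to lastFinishedPulling (~19.49 s) gives podStartSLOduration (~1.64 s). A small check of that arithmetic using the monotonic m=+ offsets printed in the log:

```python
# Worked check of the startup-latency figures above, using the m=+ monotonic
# offsets from the log (seconds since kubelet start).
first_started_pulling = 24.304714107
last_finished_pulling = 43.796449640
e2e_duration          = 21.132036771   # podStartE2EDuration as logged

pull_window  = last_finished_pulling - first_started_pulling   # ~19.492 s
slo_duration = e2e_duration - pull_window                      # ~1.640 s
print(f"pull window  : {pull_window:.9f} s")
print(f"SLO duration : {slo_duration:.9f} s (logged: 1.640301238 s)")
```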
Dec 13 01:08:30.933439 kubelet[2595]: I1213 01:08:30.933369 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:08:30.934075 kubelet[2595]: E1213 01:08:30.934045 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:30.958229 containerd[1461]: time="2024-12-13T01:08:30.958085377Z" level=info msg="StopPodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\"" Dec 13 01:08:31.109444 kubelet[2595]: E1213 01:08:31.109309 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.016 [INFO][4078] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.017 [INFO][4078] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" iface="eth0" netns="/var/run/netns/cni-d5b91b53-84ac-01ee-8c53-ec7bacde8444" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.017 [INFO][4078] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" iface="eth0" netns="/var/run/netns/cni-d5b91b53-84ac-01ee-8c53-ec7bacde8444" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.018 [INFO][4078] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" iface="eth0" netns="/var/run/netns/cni-d5b91b53-84ac-01ee-8c53-ec7bacde8444" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.018 [INFO][4078] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.018 [INFO][4078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.071 [INFO][4086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.071 [INFO][4086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.071 [INFO][4086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.119 [WARNING][4086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.119 [INFO][4086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.145 [INFO][4086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:31.151221 containerd[1461]: 2024-12-13 01:08:31.148 [INFO][4078] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:31.151881 containerd[1461]: time="2024-12-13T01:08:31.151455550Z" level=info msg="TearDown network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" successfully" Dec 13 01:08:31.151881 containerd[1461]: time="2024-12-13T01:08:31.151489163Z" level=info msg="StopPodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" returns successfully" Dec 13 01:08:31.152835 containerd[1461]: time="2024-12-13T01:08:31.152685867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-j6szh,Uid:56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:08:31.154205 systemd[1]: run-netns-cni\x2dd5b91b53\x2d84ac\x2d01ee\x2d8c53\x2dec7bacde8444.mount: Deactivated successfully. Dec 13 01:08:31.412949 systemd-networkd[1395]: cali4cde4d09e29: Link UP Dec 13 01:08:31.413789 systemd-networkd[1395]: cali4cde4d09e29: Gained carrier Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.327 [INFO][4096] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.337 [INFO][4096] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0 calico-apiserver-79f8c64c55- calico-apiserver 56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2 893 0 2024-12-13 01:08:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f8c64c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79f8c64c55-j6szh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4cde4d09e29 [] []}} ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.338 [INFO][4096] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.368 [INFO][4109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" HandleID="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.376 [INFO][4109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" HandleID="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003618f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79f8c64c55-j6szh", "timestamp":"2024-12-13 01:08:31.368317916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.376 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.376 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.376 [INFO][4109] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.377 [INFO][4109] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.383 [INFO][4109] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.388 [INFO][4109] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.389 [INFO][4109] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.391 [INFO][4109] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.391 [INFO][4109] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.393 [INFO][4109] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.398 [INFO][4109] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.402 [INFO][4109] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.402 [INFO][4109] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.129/26] handle="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" host="localhost" Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.402 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:31.429472 containerd[1461]: 2024-12-13 01:08:31.402 [INFO][4109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" HandleID="k8s-pod-network.c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.430226 containerd[1461]: 2024-12-13 01:08:31.405 [INFO][4096] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79f8c64c55-j6szh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cde4d09e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:31.430226 containerd[1461]: 2024-12-13 01:08:31.405 [INFO][4096] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.430226 containerd[1461]: 2024-12-13 01:08:31.405 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cde4d09e29 ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.430226 containerd[1461]: 2024-12-13 01:08:31.413 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.430226 containerd[1461]: 2024-12-13 01:08:31.413 [INFO][4096] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca", Pod:"calico-apiserver-79f8c64c55-j6szh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cde4d09e29", MAC:"f6:2d:ec:35:62:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:31.430226 containerd[1461]: 2024-12-13 01:08:31.426 [INFO][4096] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-j6szh" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:31.462262 containerd[1461]: time="2024-12-13T01:08:31.462174657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:31.462699 containerd[1461]: time="2024-12-13T01:08:31.462593242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:31.462699 containerd[1461]: time="2024-12-13T01:08:31.462666970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:31.462965 containerd[1461]: time="2024-12-13T01:08:31.462895509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:31.498643 systemd[1]: Started cri-containerd-c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca.scope - libcontainer container c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca. 
Dec 13 01:08:31.514158 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:08:31.546215 containerd[1461]: time="2024-12-13T01:08:31.546168717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-j6szh,Uid:56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca\"" Dec 13 01:08:31.548750 containerd[1461]: time="2024-12-13T01:08:31.548611228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:08:31.564437 kernel: bpftool[4193]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:08:31.837638 systemd-networkd[1395]: vxlan.calico: Link UP Dec 13 01:08:31.837650 systemd-networkd[1395]: vxlan.calico: Gained carrier Dec 13 01:08:31.956623 containerd[1461]: time="2024-12-13T01:08:31.956569971Z" level=info msg="StopPodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\"" Dec 13 01:08:31.957198 containerd[1461]: time="2024-12-13T01:08:31.957104414Z" level=info msg="StopPodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\"" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.009 [INFO][4298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.009 [INFO][4298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" iface="eth0" netns="/var/run/netns/cni-b6098f69-85bd-b0a3-cc8c-c72317206065" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.010 [INFO][4298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" iface="eth0" netns="/var/run/netns/cni-b6098f69-85bd-b0a3-cc8c-c72317206065" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.011 [INFO][4298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" iface="eth0" netns="/var/run/netns/cni-b6098f69-85bd-b0a3-cc8c-c72317206065" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.011 [INFO][4298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.011 [INFO][4298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.036 [INFO][4317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.036 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.036 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.044 [WARNING][4317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.044 [INFO][4317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.045 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:32.054201 containerd[1461]: 2024-12-13 01:08:32.051 [INFO][4298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:32.054966 containerd[1461]: time="2024-12-13T01:08:32.054381489Z" level=info msg="TearDown network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" successfully" Dec 13 01:08:32.054966 containerd[1461]: time="2024-12-13T01:08:32.054450478Z" level=info msg="StopPodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" returns successfully" Dec 13 01:08:32.055021 kubelet[2595]: E1213 01:08:32.054795 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:32.056026 containerd[1461]: time="2024-12-13T01:08:32.055669354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gcxjc,Uid:c89f9093-1b31-49a1-b329-531dacccd48c,Namespace:kube-system,Attempt:1,}" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.021 [INFO][4307] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.021 [INFO][4307] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" iface="eth0" netns="/var/run/netns/cni-87f2caf6-1982-f33c-083e-a5c423f03cd4" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.022 [INFO][4307] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" iface="eth0" netns="/var/run/netns/cni-87f2caf6-1982-f33c-083e-a5c423f03cd4" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.022 [INFO][4307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" iface="eth0" netns="/var/run/netns/cni-87f2caf6-1982-f33c-083e-a5c423f03cd4" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.023 [INFO][4307] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.023 [INFO][4307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.051 [INFO][4322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.051 [INFO][4322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.051 [INFO][4322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.059 [WARNING][4322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.059 [INFO][4322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.061 [INFO][4322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:32.066757 containerd[1461]: 2024-12-13 01:08:32.063 [INFO][4307] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:32.067377 containerd[1461]: time="2024-12-13T01:08:32.067343322Z" level=info msg="TearDown network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" successfully" Dec 13 01:08:32.067377 containerd[1461]: time="2024-12-13T01:08:32.067373799Z" level=info msg="StopPodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" returns successfully" Dec 13 01:08:32.068029 containerd[1461]: time="2024-12-13T01:08:32.068004892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd4b76b86-ckqv7,Uid:8ea07a70-019b-41be-b5b8-8680d6837b86,Namespace:calico-system,Attempt:1,}" Dec 13 01:08:32.163118 systemd[1]: run-containerd-runc-k8s.io-c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca-runc.sDb4lf.mount: Deactivated successfully. Dec 13 01:08:32.165111 systemd[1]: run-netns-cni\x2d87f2caf6\x2d1982\x2df33c\x2d083e\x2da5c423f03cd4.mount: Deactivated successfully. Dec 13 01:08:32.165450 systemd[1]: run-netns-cni\x2db6098f69\x2d85bd\x2db0a3\x2dcc8c\x2dc72317206065.mount: Deactivated successfully. 
Dec 13 01:08:32.217277 systemd-networkd[1395]: cali74ab1cea06e: Link UP Dec 13 01:08:32.218072 systemd-networkd[1395]: cali74ab1cea06e: Gained carrier Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.118 [INFO][4333] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--gcxjc-eth0 coredns-76f75df574- kube-system c89f9093-1b31-49a1-b329-531dacccd48c 903 0 2024-12-13 01:07:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-gcxjc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74ab1cea06e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.118 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.175 [INFO][4368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" HandleID="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.183 [INFO][4368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" HandleID="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502e60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-gcxjc", "timestamp":"2024-12-13 01:08:32.175069405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.183 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.183 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.183 [INFO][4368] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.184 [INFO][4368] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.189 [INFO][4368] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.195 [INFO][4368] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.197 [INFO][4368] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.199 [INFO][4368] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.199 [INFO][4368] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.201 [INFO][4368] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0 Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.205 [INFO][4368] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.210 [INFO][4368] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.210 [INFO][4368] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" host="localhost" Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.210 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:08:32.231044 containerd[1461]: 2024-12-13 01:08:32.210 [INFO][4368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" HandleID="k8s-pod-network.6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.231633 containerd[1461]: 2024-12-13 01:08:32.212 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gcxjc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c89f9093-1b31-49a1-b329-531dacccd48c", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-gcxjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74ab1cea06e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:32.231633 containerd[1461]: 2024-12-13 01:08:32.212 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.231633 containerd[1461]: 2024-12-13 01:08:32.213 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74ab1cea06e ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.231633 containerd[1461]: 2024-12-13 01:08:32.218 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.231633 containerd[1461]: 2024-12-13 01:08:32.219 
[INFO][4333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gcxjc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c89f9093-1b31-49a1-b329-531dacccd48c", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0", Pod:"coredns-76f75df574-gcxjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74ab1cea06e", MAC:"f2:fa:20:58:9c:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:32.231633 containerd[1461]: 2024-12-13 01:08:32.228 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0" Namespace="kube-system" Pod="coredns-76f75df574-gcxjc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:32.254647 systemd-networkd[1395]: calibb58b0b9604: Link UP Dec 13 01:08:32.254916 systemd-networkd[1395]: calibb58b0b9604: Gained carrier Dec 13 01:08:32.262613 containerd[1461]: time="2024-12-13T01:08:32.262521189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:32.263091 containerd[1461]: time="2024-12-13T01:08:32.262726574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:32.263091 containerd[1461]: time="2024-12-13T01:08:32.262785484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:32.263158 containerd[1461]: time="2024-12-13T01:08:32.263085848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:32.293532 systemd[1]: Started cri-containerd-6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0.scope - libcontainer container 6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0. Dec 13 01:08:32.307135 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:08:32.336520 containerd[1461]: time="2024-12-13T01:08:32.336459656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gcxjc,Uid:c89f9093-1b31-49a1-b329-531dacccd48c,Namespace:kube-system,Attempt:1,} returns sandbox id \"6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0\"" Dec 13 01:08:32.337269 kubelet[2595]: E1213 01:08:32.337246 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:32.339554 containerd[1461]: time="2024-12-13T01:08:32.339514205Z" level=info msg="CreateContainer within sandbox \"6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.152 [INFO][4351] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0 calico-kube-controllers-dd4b76b86- calico-system 8ea07a70-019b-41be-b5b8-8680d6837b86 904 0 2024-12-13 01:08:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dd4b76b86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dd4b76b86-ckqv7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibb58b0b9604 [] []}} ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.152 [INFO][4351] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.193 [INFO][4392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" HandleID="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.201 [INFO][4392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" HandleID="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003096b0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-kube-controllers-dd4b76b86-ckqv7", "timestamp":"2024-12-13 01:08:32.193770673 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.201 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.210 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.210 [INFO][4392] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.212 [INFO][4392] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.217 [INFO][4392] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.222 [INFO][4392] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.227 [INFO][4392] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.230 [INFO][4392] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.230 [INFO][4392] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.231 [INFO][4392] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9 Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.236 [INFO][4392] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.243 [INFO][4392] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.243 [INFO][4392] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" host="localhost" Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.243 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:08:32.390992 containerd[1461]: 2024-12-13 01:08:32.243 [INFO][4392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" HandleID="k8s-pod-network.bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.391579 containerd[1461]: 2024-12-13 01:08:32.251 [INFO][4351] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0", GenerateName:"calico-kube-controllers-dd4b76b86-", Namespace:"calico-system", SelfLink:"", UID:"8ea07a70-019b-41be-b5b8-8680d6837b86", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd4b76b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dd4b76b86-ckqv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb58b0b9604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:32.391579 containerd[1461]: 2024-12-13 01:08:32.251 [INFO][4351] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.391579 containerd[1461]: 2024-12-13 01:08:32.251 [INFO][4351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb58b0b9604 ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.391579 containerd[1461]: 2024-12-13 01:08:32.254 [INFO][4351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.391579 containerd[1461]: 2024-12-13 01:08:32.255 [INFO][4351] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0", GenerateName:"calico-kube-controllers-dd4b76b86-", Namespace:"calico-system", SelfLink:"", UID:"8ea07a70-019b-41be-b5b8-8680d6837b86", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd4b76b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9", Pod:"calico-kube-controllers-dd4b76b86-ckqv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb58b0b9604", MAC:"e2:87:d3:ab:74:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:32.391579 containerd[1461]: 2024-12-13 01:08:32.386 [INFO][4351] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9" Namespace="calico-system" Pod="calico-kube-controllers-dd4b76b86-ckqv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:32.686446 containerd[1461]: time="2024-12-13T01:08:32.686284096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:32.686446 containerd[1461]: time="2024-12-13T01:08:32.686363505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:32.686446 containerd[1461]: time="2024-12-13T01:08:32.686377351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:32.686650 containerd[1461]: time="2024-12-13T01:08:32.686522563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:32.720716 systemd[1]: Started cri-containerd-bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9.scope - libcontainer container bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9. 
Dec 13 01:08:32.735096 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:08:32.764486 containerd[1461]: time="2024-12-13T01:08:32.764437670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd4b76b86-ckqv7,Uid:8ea07a70-019b-41be-b5b8-8680d6837b86,Namespace:calico-system,Attempt:1,} returns sandbox id \"bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9\"" Dec 13 01:08:32.778597 systemd-networkd[1395]: cali4cde4d09e29: Gained IPv6LL Dec 13 01:08:32.960462 containerd[1461]: time="2024-12-13T01:08:32.957754035Z" level=info msg="StopPodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\"" Dec 13 01:08:32.960462 containerd[1461]: time="2024-12-13T01:08:32.957989396Z" level=info msg="StopPodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\"" Dec 13 01:08:32.960462 containerd[1461]: time="2024-12-13T01:08:32.958936464Z" level=info msg="StopPodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\"" Dec 13 01:08:33.002798 containerd[1461]: time="2024-12-13T01:08:33.002730959Z" level=info msg="CreateContainer within sandbox \"6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43cbed3e2f21e78670ac0e8c68975db7d1e0fe1b27c6c83987c0db0daa85d4d1\"" Dec 13 01:08:33.004991 containerd[1461]: time="2024-12-13T01:08:33.004920165Z" level=info msg="StartContainer for \"43cbed3e2f21e78670ac0e8c68975db7d1e0fe1b27c6c83987c0db0daa85d4d1\"" Dec 13 01:08:33.041610 systemd[1]: Started cri-containerd-43cbed3e2f21e78670ac0e8c68975db7d1e0fe1b27c6c83987c0db0daa85d4d1.scope - libcontainer container 43cbed3e2f21e78670ac0e8c68975db7d1e0fe1b27c6c83987c0db0daa85d4d1. Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.131 [INFO][4565] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.131 [INFO][4565] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" iface="eth0" netns="/var/run/netns/cni-bdb7f582-0b6b-a8f9-da36-7e34a1ffa6ba" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.132 [INFO][4565] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" iface="eth0" netns="/var/run/netns/cni-bdb7f582-0b6b-a8f9-da36-7e34a1ffa6ba" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.132 [INFO][4565] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" iface="eth0" netns="/var/run/netns/cni-bdb7f582-0b6b-a8f9-da36-7e34a1ffa6ba" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.132 [INFO][4565] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.132 [INFO][4565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.153 [INFO][4616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.153 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.153 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.161 [WARNING][4616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.161 [INFO][4616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.163 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:33.169005 containerd[1461]: 2024-12-13 01:08:33.165 [INFO][4565] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:33.172434 containerd[1461]: time="2024-12-13T01:08:33.169184665Z" level=info msg="TearDown network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" successfully" Dec 13 01:08:33.172434 containerd[1461]: time="2024-12-13T01:08:33.171517220Z" level=info msg="StopPodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" returns successfully" Dec 13 01:08:33.172651 systemd[1]: run-netns-cni\x2dbdb7f582\x2d0b6b\x2da8f9\x2dda36\x2d7e34a1ffa6ba.mount: Deactivated successfully. 
Dec 13 01:08:33.174635 containerd[1461]: time="2024-12-13T01:08:33.174608679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-jqxwq,Uid:048e2ce0-7d8d-4a74-8789-6bbabf5e378c,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:08:33.209012 containerd[1461]: time="2024-12-13T01:08:33.208924835Z" level=info msg="StartContainer for \"43cbed3e2f21e78670ac0e8c68975db7d1e0fe1b27c6c83987c0db0daa85d4d1\" returns successfully" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.161 [INFO][4564] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.161 [INFO][4564] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" iface="eth0" netns="/var/run/netns/cni-22b4c143-136d-3c6e-2d2c-fdb91c86b5aa" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.162 [INFO][4564] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" iface="eth0" netns="/var/run/netns/cni-22b4c143-136d-3c6e-2d2c-fdb91c86b5aa" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.162 [INFO][4564] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" iface="eth0" netns="/var/run/netns/cni-22b4c143-136d-3c6e-2d2c-fdb91c86b5aa" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.162 [INFO][4564] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.162 [INFO][4564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.195 [INFO][4624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.195 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.195 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.201 [WARNING][4624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.201 [INFO][4624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.204 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:08:33.209141 containerd[1461]: 2024-12-13 01:08:33.206 [INFO][4564] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:33.211297 containerd[1461]: time="2024-12-13T01:08:33.211214500Z" level=info msg="TearDown network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" successfully" Dec 13 01:08:33.211297 containerd[1461]: time="2024-12-13T01:08:33.211238355Z" level=info msg="StopPodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" returns successfully" Dec 13 01:08:33.212059 kubelet[2595]: E1213 01:08:33.211510 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:33.212424 containerd[1461]: time="2024-12-13T01:08:33.211866963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57lkc,Uid:6cab1421-2490-4e1e-a106-3059cdb91580,Namespace:kube-system,Attempt:1,}" Dec 13 01:08:33.212510 systemd[1]: run-netns-cni\x2d22b4c143\x2d136d\x2d3c6e\x2d2d2c\x2dfdb91c86b5aa.mount: Deactivated successfully. Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.161 [INFO][4563] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.163 [INFO][4563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" iface="eth0" netns="/var/run/netns/cni-c8e4b75c-18a5-a9de-2153-618270721068" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.163 [INFO][4563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" iface="eth0" netns="/var/run/netns/cni-c8e4b75c-18a5-a9de-2153-618270721068" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.163 [INFO][4563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" iface="eth0" netns="/var/run/netns/cni-c8e4b75c-18a5-a9de-2153-618270721068" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.163 [INFO][4563] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.163 [INFO][4563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.205 [INFO][4625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.205 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.205 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.339 [WARNING][4625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.339 [INFO][4625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.341 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:33.346894 containerd[1461]: 2024-12-13 01:08:33.344 [INFO][4563] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:33.347515 containerd[1461]: time="2024-12-13T01:08:33.347452189Z" level=info msg="TearDown network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" successfully" Dec 13 01:08:33.347515 containerd[1461]: time="2024-12-13T01:08:33.347490110Z" level=info msg="StopPodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" returns successfully" Dec 13 01:08:33.348177 containerd[1461]: time="2024-12-13T01:08:33.348147384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zplvr,Uid:53ce73c4-d9e2-4a98-add0-afa55318cf9b,Namespace:calico-system,Attempt:1,}" Dec 13 01:08:33.350005 systemd[1]: run-netns-cni\x2dc8e4b75c\x2d18a5\x2da9de\x2d2153\x2d618270721068.mount: Deactivated successfully. 
Dec 13 01:08:33.674640 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Dec 13 01:08:33.738621 systemd-networkd[1395]: calibb58b0b9604: Gained IPv6LL Dec 13 01:08:33.803693 systemd-networkd[1395]: cali74ab1cea06e: Gained IPv6LL Dec 13 01:08:33.923002 systemd-networkd[1395]: cali18fb3e71582: Link UP Dec 13 01:08:33.923226 systemd-networkd[1395]: cali18fb3e71582: Gained carrier Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.841 [INFO][4650] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0 calico-apiserver-79f8c64c55- calico-apiserver 048e2ce0-7d8d-4a74-8789-6bbabf5e378c 923 0 2024-12-13 01:08:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f8c64c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79f8c64c55-jqxwq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali18fb3e71582 [] []}} ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.841 [INFO][4650] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.879 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" HandleID="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.887 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" HandleID="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027fd00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79f8c64c55-jqxwq", "timestamp":"2024-12-13 01:08:33.879733344 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.887 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.887 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.887 [INFO][4688] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.888 [INFO][4688] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.892 [INFO][4688] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.898 [INFO][4688] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.901 [INFO][4688] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.903 [INFO][4688] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.903 [INFO][4688] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.905 [INFO][4688] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33 Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.909 [INFO][4688] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.914 [INFO][4688] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.915 [INFO][4688] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" host="localhost" Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.915 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:08:33.943183 containerd[1461]: 2024-12-13 01:08:33.915 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" HandleID="k8s-pod-network.9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.945244 containerd[1461]: 2024-12-13 01:08:33.918 [INFO][4650] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"048e2ce0-7d8d-4a74-8789-6bbabf5e378c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79f8c64c55-jqxwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18fb3e71582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:33.945244 containerd[1461]: 2024-12-13 01:08:33.919 [INFO][4650] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.945244 containerd[1461]: 2024-12-13 01:08:33.919 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18fb3e71582 ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.945244 containerd[1461]: 2024-12-13 01:08:33.923 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.945244 containerd[1461]: 2024-12-13 01:08:33.923 [INFO][4650] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"048e2ce0-7d8d-4a74-8789-6bbabf5e378c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33", Pod:"calico-apiserver-79f8c64c55-jqxwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18fb3e71582", MAC:"76:95:80:dd:08:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:33.945244 containerd[1461]: 2024-12-13 01:08:33.937 [INFO][4650] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33" Namespace="calico-apiserver" Pod="calico-apiserver-79f8c64c55-jqxwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:33.953113 systemd-networkd[1395]: calie5df915fb09: Link UP Dec 13 01:08:33.954389 systemd-networkd[1395]: calie5df915fb09: Gained carrier Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.858 [INFO][4660] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zplvr-eth0 csi-node-driver- calico-system 53ce73c4-d9e2-4a98-add0-afa55318cf9b 924 0 2024-12-13 01:08:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zplvr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie5df915fb09 [] []}} ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.860 [INFO][4660] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.903 [INFO][4694] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" HandleID="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.911 [INFO][4694] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" HandleID="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zplvr", "timestamp":"2024-12-13 01:08:33.903032937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.911 [INFO][4694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.915 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.915 [INFO][4694] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.916 [INFO][4694] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.920 [INFO][4694] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.925 [INFO][4694] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.927 [INFO][4694] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.930 [INFO][4694] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.930 [INFO][4694] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.934 [INFO][4694] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917 Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.937 [INFO][4694] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.943 [INFO][4694] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" 
host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.943 [INFO][4694] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" host="localhost" Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.943 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:33.976308 containerd[1461]: 2024-12-13 01:08:33.943 [INFO][4694] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" HandleID="k8s-pod-network.f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.977363 containerd[1461]: 2024-12-13 01:08:33.949 [INFO][4660] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zplvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53ce73c4-d9e2-4a98-add0-afa55318cf9b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zplvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5df915fb09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:33.977363 containerd[1461]: 2024-12-13 01:08:33.949 [INFO][4660] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.977363 containerd[1461]: 2024-12-13 01:08:33.949 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5df915fb09 ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.977363 containerd[1461]: 2024-12-13 01:08:33.955 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.977363 containerd[1461]: 2024-12-13 01:08:33.955 [INFO][4660] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zplvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53ce73c4-d9e2-4a98-add0-afa55318cf9b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917", Pod:"csi-node-driver-zplvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5df915fb09", MAC:"06:26:32:0e:ba:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:33.977363 containerd[1461]: 2024-12-13 01:08:33.969 [INFO][4660] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917" Namespace="calico-system" Pod="csi-node-driver-zplvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:33.982066 containerd[1461]: time="2024-12-13T01:08:33.979485829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:33.982185 containerd[1461]: time="2024-12-13T01:08:33.982044619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:33.982185 containerd[1461]: time="2024-12-13T01:08:33.982057603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.982185 containerd[1461]: time="2024-12-13T01:08:33.982139637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:33.993302 systemd-networkd[1395]: cali1a45482d3f6: Link UP Dec 13 01:08:33.994688 systemd-networkd[1395]: cali1a45482d3f6: Gained carrier Dec 13 01:08:34.003646 systemd[1]: Started cri-containerd-9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33.scope - libcontainer container 9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33. 
Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.859 [INFO][4671] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--57lkc-eth0 coredns-76f75df574- kube-system 6cab1421-2490-4e1e-a106-3059cdb91580 925 0 2024-12-13 01:07:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-57lkc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a45482d3f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.859 [INFO][4671] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.905 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" HandleID="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.913 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" HandleID="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c21d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-57lkc", "timestamp":"2024-12-13 01:08:33.905040833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.913 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.943 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.945 [INFO][4699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.948 [INFO][4699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.953 [INFO][4699] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.958 [INFO][4699] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.960 [INFO][4699] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.968 [INFO][4699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.968 [INFO][4699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.969 [INFO][4699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6 Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.974 [INFO][4699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.982 [INFO][4699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.982 [INFO][4699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" host="localhost" Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.982 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:08:34.010650 containerd[1461]: 2024-12-13 01:08:33.982 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" HandleID="k8s-pod-network.bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.011265 containerd[1461]: 2024-12-13 01:08:33.987 [INFO][4671] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--57lkc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6cab1421-2490-4e1e-a106-3059cdb91580", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-57lkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a45482d3f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:34.011265 containerd[1461]: 2024-12-13 01:08:33.987 [INFO][4671] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.011265 containerd[1461]: 2024-12-13 01:08:33.987 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a45482d3f6 ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.011265 containerd[1461]: 2024-12-13 01:08:33.995 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.011265 containerd[1461]: 2024-12-13 01:08:33.996 
[INFO][4671] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--57lkc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6cab1421-2490-4e1e-a106-3059cdb91580", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6", Pod:"coredns-76f75df574-57lkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a45482d3f6", MAC:"02:6c:01:91:ef:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:34.011265 containerd[1461]: 2024-12-13 01:08:34.007 [INFO][4671] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6" Namespace="kube-system" Pod="coredns-76f75df574-57lkc" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:34.016543 containerd[1461]: time="2024-12-13T01:08:34.016016620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:34.016543 containerd[1461]: time="2024-12-13T01:08:34.016079207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:34.016543 containerd[1461]: time="2024-12-13T01:08:34.016092822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:34.016543 containerd[1461]: time="2024-12-13T01:08:34.016172191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:34.025702 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:08:34.040578 systemd[1]: Started cri-containerd-f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917.scope - libcontainer container f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917. Dec 13 01:08:34.057439 containerd[1461]: time="2024-12-13T01:08:34.057128561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:34.057439 containerd[1461]: time="2024-12-13T01:08:34.057188954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:34.057439 containerd[1461]: time="2024-12-13T01:08:34.057218580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:34.057439 containerd[1461]: time="2024-12-13T01:08:34.057331542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:34.060082 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:08:34.065942 containerd[1461]: time="2024-12-13T01:08:34.065614308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f8c64c55-jqxwq,Uid:048e2ce0-7d8d-4a74-8789-6bbabf5e378c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33\"" Dec 13 01:08:34.081992 systemd[1]: Started cri-containerd-bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6.scope - libcontainer container bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6. 
Dec 13 01:08:34.086449 containerd[1461]: time="2024-12-13T01:08:34.086409002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zplvr,Uid:53ce73c4-d9e2-4a98-add0-afa55318cf9b,Namespace:calico-system,Attempt:1,} returns sandbox id \"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917\"" Dec 13 01:08:34.098662 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:08:34.132444 kubelet[2595]: E1213 01:08:34.132416 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:34.158018 containerd[1461]: time="2024-12-13T01:08:34.157974956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57lkc,Uid:6cab1421-2490-4e1e-a106-3059cdb91580,Namespace:kube-system,Attempt:1,} returns sandbox id \"bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6\"" Dec 13 01:08:34.159288 kubelet[2595]: E1213 01:08:34.159262 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:34.163370 kubelet[2595]: I1213 01:08:34.163348 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gcxjc" podStartSLOduration=40.16329764 podStartE2EDuration="40.16329764s" podCreationTimestamp="2024-12-13 01:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:34.146335013 +0000 UTC m=+53.305603141" watchObservedRunningTime="2024-12-13 01:08:34.16329764 +0000 UTC m=+53.322565768" Dec 13 01:08:34.168109 containerd[1461]: time="2024-12-13T01:08:34.167907857Z" level=info msg="CreateContainer within sandbox \"bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:08:34.196767 containerd[1461]: time="2024-12-13T01:08:34.196659916Z" level=info msg="CreateContainer within sandbox \"bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd1e58834db84c18230ea178185ad3222bccdd3fe877b6ac8442e88963841ec6\"" Dec 13 01:08:34.197858 containerd[1461]: time="2024-12-13T01:08:34.197786739Z" level=info msg="StartContainer for \"dd1e58834db84c18230ea178185ad3222bccdd3fe877b6ac8442e88963841ec6\"" Dec 13 01:08:34.257575 systemd[1]: Started cri-containerd-dd1e58834db84c18230ea178185ad3222bccdd3fe877b6ac8442e88963841ec6.scope - libcontainer container dd1e58834db84c18230ea178185ad3222bccdd3fe877b6ac8442e88963841ec6. 
Dec 13 01:08:34.339449 containerd[1461]: time="2024-12-13T01:08:34.339387837Z" level=info msg="StartContainer for \"dd1e58834db84c18230ea178185ad3222bccdd3fe877b6ac8442e88963841ec6\" returns successfully" Dec 13 01:08:34.773553 containerd[1461]: time="2024-12-13T01:08:34.773504776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:34.774454 containerd[1461]: time="2024-12-13T01:08:34.774279109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:08:34.776074 containerd[1461]: time="2024-12-13T01:08:34.776034912Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:34.779274 containerd[1461]: time="2024-12-13T01:08:34.779248439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:34.779740 containerd[1461]: time="2024-12-13T01:08:34.779716948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.231066616s" Dec 13 01:08:34.779798 containerd[1461]: time="2024-12-13T01:08:34.779745161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:08:34.780449 containerd[1461]: time="2024-12-13T01:08:34.780411912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:08:34.782093 containerd[1461]: time="2024-12-13T01:08:34.782063820Z" level=info msg="CreateContainer within sandbox \"c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:08:34.795590 containerd[1461]: time="2024-12-13T01:08:34.795549144Z" level=info msg="CreateContainer within sandbox \"c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b834e95a079ecc4528b43584edd8b016f541d7ac9cf3bf4801fb07cb0b84c0e4\"" Dec 13 01:08:34.798219 containerd[1461]: time="2024-12-13T01:08:34.798179658Z" level=info msg="StartContainer for \"b834e95a079ecc4528b43584edd8b016f541d7ac9cf3bf4801fb07cb0b84c0e4\"" Dec 13 01:08:34.831660 systemd[1]: Started cri-containerd-b834e95a079ecc4528b43584edd8b016f541d7ac9cf3bf4801fb07cb0b84c0e4.scope - libcontainer container b834e95a079ecc4528b43584edd8b016f541d7ac9cf3bf4801fb07cb0b84c0e4. 
Dec 13 01:08:35.071440 containerd[1461]: time="2024-12-13T01:08:35.071247522Z" level=info msg="StartContainer for \"b834e95a079ecc4528b43584edd8b016f541d7ac9cf3bf4801fb07cb0b84c0e4\" returns successfully" Dec 13 01:08:35.151993 kubelet[2595]: E1213 01:08:35.151961 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:35.153038 kubelet[2595]: E1213 01:08:35.152761 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:35.164939 kubelet[2595]: I1213 01:08:35.164898 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f8c64c55-j6szh" podStartSLOduration=27.932583651 podStartE2EDuration="31.164843105s" podCreationTimestamp="2024-12-13 01:08:04 +0000 UTC" firstStartedPulling="2024-12-13 01:08:31.547873334 +0000 UTC m=+50.707141463" lastFinishedPulling="2024-12-13 01:08:34.780132789 +0000 UTC m=+53.939400917" observedRunningTime="2024-12-13 01:08:35.164690168 +0000 UTC m=+54.323958306" watchObservedRunningTime="2024-12-13 01:08:35.164843105 +0000 UTC m=+54.324111233" Dec 13 01:08:35.274570 systemd-networkd[1395]: calie5df915fb09: Gained IPv6LL Dec 13 01:08:35.514812 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:38220.service - OpenSSH per-connection server daemon (10.0.0.1:38220). Dec 13 01:08:35.565987 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 38220 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:35.567901 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:35.572540 systemd-logind[1444]: New session 17 of user core. Dec 13 01:08:35.581548 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:08:35.758645 sshd[4973]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:35.763931 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:38220.service: Deactivated successfully. Dec 13 01:08:35.766769 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:08:35.767960 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:08:35.769043 systemd-logind[1444]: Removed session 17. 
Dec 13 01:08:35.786616 systemd-networkd[1395]: cali18fb3e71582: Gained IPv6LL Dec 13 01:08:36.042642 systemd-networkd[1395]: cali1a45482d3f6: Gained IPv6LL Dec 13 01:08:36.153382 kubelet[2595]: I1213 01:08:36.153342 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:08:36.153883 kubelet[2595]: E1213 01:08:36.153777 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:36.154259 kubelet[2595]: E1213 01:08:36.154229 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:36.948593 containerd[1461]: time="2024-12-13T01:08:36.948515262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:36.949858 containerd[1461]: time="2024-12-13T01:08:36.949780765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:08:36.950892 containerd[1461]: time="2024-12-13T01:08:36.950839981Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:36.953700 containerd[1461]: time="2024-12-13T01:08:36.953655112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:36.954331 containerd[1461]: time="2024-12-13T01:08:36.954268963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.172530242s" Dec 13 01:08:36.954331 containerd[1461]: time="2024-12-13T01:08:36.954326401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:08:36.955258 containerd[1461]: time="2024-12-13T01:08:36.955033807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:08:36.963654 containerd[1461]: time="2024-12-13T01:08:36.963609753Z" level=info msg="CreateContainer within sandbox \"bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:08:36.979530 containerd[1461]: time="2024-12-13T01:08:36.979496360Z" level=info msg="CreateContainer within sandbox \"bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"29c7ac3b823af960d1bee28cdf6230f1bd24f9cb8095a000146fe29df67571e5\"" Dec 13 01:08:36.980208 containerd[1461]: time="2024-12-13T01:08:36.980164203Z" level=info msg="StartContainer for \"29c7ac3b823af960d1bee28cdf6230f1bd24f9cb8095a000146fe29df67571e5\"" Dec 13 01:08:37.018563 systemd[1]: Started 
cri-containerd-29c7ac3b823af960d1bee28cdf6230f1bd24f9cb8095a000146fe29df67571e5.scope - libcontainer container 29c7ac3b823af960d1bee28cdf6230f1bd24f9cb8095a000146fe29df67571e5. Dec 13 01:08:37.120884 containerd[1461]: time="2024-12-13T01:08:37.120820921Z" level=info msg="StartContainer for \"29c7ac3b823af960d1bee28cdf6230f1bd24f9cb8095a000146fe29df67571e5\" returns successfully" Dec 13 01:08:37.159710 kubelet[2595]: E1213 01:08:37.158637 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:37.167184 kubelet[2595]: I1213 01:08:37.167140 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dd4b76b86-ckqv7" podStartSLOduration=28.978816593 podStartE2EDuration="33.16708249s" podCreationTimestamp="2024-12-13 01:08:04 +0000 UTC" firstStartedPulling="2024-12-13 01:08:32.766467627 +0000 UTC m=+51.925735755" lastFinishedPulling="2024-12-13 01:08:36.954733514 +0000 UTC m=+56.114001652" observedRunningTime="2024-12-13 01:08:37.166024305 +0000 UTC m=+56.325292433" watchObservedRunningTime="2024-12-13 01:08:37.16708249 +0000 UTC m=+56.326350618" Dec 13 01:08:37.167429 kubelet[2595]: I1213 01:08:37.167296 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-57lkc" podStartSLOduration=43.167272056 podStartE2EDuration="43.167272056s" podCreationTimestamp="2024-12-13 01:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:35.182392371 +0000 UTC m=+54.341660499" watchObservedRunningTime="2024-12-13 01:08:37.167272056 +0000 UTC m=+56.326540184" Dec 13 01:08:37.325944 containerd[1461]: time="2024-12-13T01:08:37.325804642Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:37.326642 containerd[1461]: time="2024-12-13T01:08:37.326597569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:08:37.328712 containerd[1461]: time="2024-12-13T01:08:37.328674524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 373.598317ms" Dec 13 01:08:37.328712 containerd[1461]: time="2024-12-13T01:08:37.328707987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:08:37.329339 containerd[1461]: time="2024-12-13T01:08:37.329292494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:08:37.330723 containerd[1461]: time="2024-12-13T01:08:37.330580770Z" level=info msg="CreateContainer within sandbox \"9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:08:37.346876 containerd[1461]: time="2024-12-13T01:08:37.346809398Z" level=info msg="CreateContainer within sandbox \"9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bfc948c3ea3f7ca89bc27ef48ba88fbfcefe809f3231786564a76343efb5da71\"" Dec 13 01:08:37.348968 containerd[1461]: time="2024-12-13T01:08:37.348943071Z" level=info msg="StartContainer for \"bfc948c3ea3f7ca89bc27ef48ba88fbfcefe809f3231786564a76343efb5da71\"" Dec 13 01:08:37.378537 systemd[1]: Started cri-containerd-bfc948c3ea3f7ca89bc27ef48ba88fbfcefe809f3231786564a76343efb5da71.scope - libcontainer container bfc948c3ea3f7ca89bc27ef48ba88fbfcefe809f3231786564a76343efb5da71. Dec 13 01:08:37.416950 containerd[1461]: time="2024-12-13T01:08:37.416902947Z" level=info msg="StartContainer for \"bfc948c3ea3f7ca89bc27ef48ba88fbfcefe809f3231786564a76343efb5da71\" returns successfully" Dec 13 01:08:38.191183 kubelet[2595]: I1213 01:08:38.191042 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f8c64c55-jqxwq" podStartSLOduration=30.933089955 podStartE2EDuration="34.190986917s" podCreationTimestamp="2024-12-13 01:08:04 +0000 UTC" firstStartedPulling="2024-12-13 01:08:34.071125636 +0000 UTC m=+53.230393764" lastFinishedPulling="2024-12-13 01:08:37.329022598 +0000 UTC m=+56.488290726" observedRunningTime="2024-12-13 01:08:38.190311459 +0000 UTC m=+57.349579597" watchObservedRunningTime="2024-12-13 01:08:38.190986917 +0000 UTC m=+57.350255045" Dec 13 01:08:39.164793 kubelet[2595]: I1213 01:08:39.164758 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:08:39.685075 kubelet[2595]: E1213 01:08:39.685035 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:40.065507 containerd[1461]: time="2024-12-13T01:08:40.065332318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:40.070103 containerd[1461]: time="2024-12-13T01:08:40.070011694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:08:40.071701 containerd[1461]: time="2024-12-13T01:08:40.071629429Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:40.077957 containerd[1461]: time="2024-12-13T01:08:40.077896503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:40.078496 containerd[1461]: time="2024-12-13T01:08:40.078453057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.748943997s" Dec 13 01:08:40.078496 containerd[1461]: time="2024-12-13T01:08:40.078487401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:08:40.080593 containerd[1461]: time="2024-12-13T01:08:40.080546342Z" level=info msg="CreateContainer within sandbox 
\"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:08:40.496617 containerd[1461]: time="2024-12-13T01:08:40.496545588Z" level=info msg="CreateContainer within sandbox \"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0011f17ff01bc6262084edae5ffccef0a7a4c0e0f101b8f3abc5a9c7fc7aa0bf\"" Dec 13 01:08:40.497388 containerd[1461]: time="2024-12-13T01:08:40.497237286Z" level=info msg="StartContainer for \"0011f17ff01bc6262084edae5ffccef0a7a4c0e0f101b8f3abc5a9c7fc7aa0bf\"" Dec 13 01:08:40.531679 systemd[1]: Started cri-containerd-0011f17ff01bc6262084edae5ffccef0a7a4c0e0f101b8f3abc5a9c7fc7aa0bf.scope - libcontainer container 0011f17ff01bc6262084edae5ffccef0a7a4c0e0f101b8f3abc5a9c7fc7aa0bf. Dec 13 01:08:40.727275 containerd[1461]: time="2024-12-13T01:08:40.727211251Z" level=info msg="StartContainer for \"0011f17ff01bc6262084edae5ffccef0a7a4c0e0f101b8f3abc5a9c7fc7aa0bf\" returns successfully" Dec 13 01:08:40.728559 containerd[1461]: time="2024-12-13T01:08:40.728529556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:08:40.767269 kubelet[2595]: I1213 01:08:40.767133 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:08:40.772182 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:59456.service - OpenSSH per-connection server daemon (10.0.0.1:59456). Dec 13 01:08:40.818414 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 59456 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:40.820279 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:40.824492 systemd-logind[1444]: New session 18 of user core. Dec 13 01:08:40.830671 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:08:40.919371 containerd[1461]: time="2024-12-13T01:08:40.918790389Z" level=info msg="StopPodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\"" Dec 13 01:08:40.979725 sshd[5166]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:40.986790 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:59456.service: Deactivated successfully. Dec 13 01:08:40.991109 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:08:40.993512 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:08:40.994961 systemd-logind[1444]: Removed session 18. Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:40.991 [WARNING][5193] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0", GenerateName:"calico-kube-controllers-dd4b76b86-", Namespace:"calico-system", SelfLink:"", UID:"8ea07a70-019b-41be-b5b8-8680d6837b86", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd4b76b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9", Pod:"calico-kube-controllers-dd4b76b86-ckqv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb58b0b9604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:40.992 [INFO][5193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:40.992 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" iface="eth0" netns="" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:40.992 [INFO][5193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:40.992 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.018 [INFO][5206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.018 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.018 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.025 [WARNING][5206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.025 [INFO][5206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.026 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.033111 containerd[1461]: 2024-12-13 01:08:41.029 [INFO][5193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.033111 containerd[1461]: time="2024-12-13T01:08:41.033067017Z" level=info msg="TearDown network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" successfully" Dec 13 01:08:41.033111 containerd[1461]: time="2024-12-13T01:08:41.033106322Z" level=info msg="StopPodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" returns successfully" Dec 13 01:08:41.168643 containerd[1461]: time="2024-12-13T01:08:41.168575216Z" level=info msg="RemovePodSandbox for \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\"" Dec 13 01:08:41.171287 containerd[1461]: time="2024-12-13T01:08:41.171023901Z" level=info msg="Forcibly stopping sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\"" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.211 [WARNING][5235] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0", GenerateName:"calico-kube-controllers-dd4b76b86-", Namespace:"calico-system", SelfLink:"", UID:"8ea07a70-019b-41be-b5b8-8680d6837b86", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd4b76b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb423f41db5e06a5641b11a3bf7960bbf9fde629ce09cdbd06049636d7151ef9", Pod:"calico-kube-controllers-dd4b76b86-ckqv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb58b0b9604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.211 [INFO][5235] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.211 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" iface="eth0" netns="" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.211 [INFO][5235] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.211 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.234 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.234 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.234 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.239 [WARNING][5243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.239 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" HandleID="k8s-pod-network.1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Workload="localhost-k8s-calico--kube--controllers--dd4b76b86--ckqv7-eth0" Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.240 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.246811 containerd[1461]: 2024-12-13 01:08:41.243 [INFO][5235] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2" Dec 13 01:08:41.247294 containerd[1461]: time="2024-12-13T01:08:41.246808308Z" level=info msg="TearDown network for sandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" successfully" Dec 13 01:08:41.257359 containerd[1461]: time="2024-12-13T01:08:41.257302024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:08:41.257464 containerd[1461]: time="2024-12-13T01:08:41.257417053Z" level=info msg="RemovePodSandbox \"1dd2a6a65ebae747b82878427654393941168c7573d4689788cfa2627508d1a2\" returns successfully" Dec 13 01:08:41.258010 containerd[1461]: time="2024-12-13T01:08:41.257982916Z" level=info msg="StopPodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\"" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.299 [WARNING][5266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zplvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53ce73c4-d9e2-4a98-add0-afa55318cf9b", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917", Pod:"csi-node-driver-zplvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5df915fb09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.299 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.299 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" iface="eth0" netns="" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.299 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.299 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.321 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.321 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.321 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.326 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.326 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.327 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.334290 containerd[1461]: 2024-12-13 01:08:41.330 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.334290 containerd[1461]: time="2024-12-13T01:08:41.334235651Z" level=info msg="TearDown network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" successfully" Dec 13 01:08:41.334290 containerd[1461]: time="2024-12-13T01:08:41.334274785Z" level=info msg="StopPodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" returns successfully" Dec 13 01:08:41.334996 containerd[1461]: time="2024-12-13T01:08:41.334858643Z" level=info msg="RemovePodSandbox for \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\"" Dec 13 01:08:41.334996 containerd[1461]: time="2024-12-13T01:08:41.334883830Z" level=info msg="Forcibly stopping sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\"" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.375 [WARNING][5296] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zplvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53ce73c4-d9e2-4a98-add0-afa55318cf9b", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917", Pod:"csi-node-driver-zplvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5df915fb09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.376 [INFO][5296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.376 [INFO][5296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" iface="eth0" netns="" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.376 [INFO][5296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.376 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.401 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.401 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.401 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.407 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.407 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" HandleID="k8s-pod-network.0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Workload="localhost-k8s-csi--node--driver--zplvr-eth0" Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.408 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.415040 containerd[1461]: 2024-12-13 01:08:41.411 [INFO][5296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb" Dec 13 01:08:41.415638 containerd[1461]: time="2024-12-13T01:08:41.415094755Z" level=info msg="TearDown network for sandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" successfully" Dec 13 01:08:41.419740 containerd[1461]: time="2024-12-13T01:08:41.419698237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:08:41.419816 containerd[1461]: time="2024-12-13T01:08:41.419756237Z" level=info msg="RemovePodSandbox \"0bfe1989872bb5e371ff9acafc074b86e52a34ca7492d80a9cac636b43e343cb\" returns successfully" Dec 13 01:08:41.420515 containerd[1461]: time="2024-12-13T01:08:41.420435195Z" level=info msg="StopPodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\"" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.459 [WARNING][5326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gcxjc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c89f9093-1b31-49a1-b329-531dacccd48c", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0", Pod:"coredns-76f75df574-gcxjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74ab1cea06e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.460 [INFO][5326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.460 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" iface="eth0" netns="" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.460 [INFO][5326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.460 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.485 [INFO][5333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.485 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.486 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.491 [WARNING][5333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.491 [INFO][5333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.492 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.498115 containerd[1461]: 2024-12-13 01:08:41.495 [INFO][5326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.498575 containerd[1461]: time="2024-12-13T01:08:41.498143943Z" level=info msg="TearDown network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" successfully" Dec 13 01:08:41.498575 containerd[1461]: time="2024-12-13T01:08:41.498177076Z" level=info msg="StopPodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" returns successfully" Dec 13 01:08:41.498888 containerd[1461]: time="2024-12-13T01:08:41.498844632Z" level=info msg="RemovePodSandbox for \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\"" Dec 13 01:08:41.498888 containerd[1461]: time="2024-12-13T01:08:41.498888144Z" level=info msg="Forcibly stopping sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\"" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.538 [WARNING][5355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--gcxjc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c89f9093-1b31-49a1-b329-531dacccd48c", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6424592d1db635de6d231408a7d944d2869b667eba0f4e45993ed6657258c1a0", Pod:"coredns-76f75df574-gcxjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74ab1cea06e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.538 [INFO][5355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.538 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" iface="eth0" netns="" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.538 [INFO][5355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.538 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.561 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.561 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.561 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.568 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.568 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" HandleID="k8s-pod-network.9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Workload="localhost-k8s-coredns--76f75df574--gcxjc-eth0" Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.569 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.575981 containerd[1461]: 2024-12-13 01:08:41.572 [INFO][5355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76" Dec 13 01:08:41.576541 containerd[1461]: time="2024-12-13T01:08:41.575998857Z" level=info msg="TearDown network for sandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" successfully" Dec 13 01:08:41.580168 containerd[1461]: time="2024-12-13T01:08:41.580123272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:08:41.580260 containerd[1461]: time="2024-12-13T01:08:41.580175030Z" level=info msg="RemovePodSandbox \"9ab5f3e159745e28465b43c062cf1801061ae2866d22ce77fc9fd226e3e6ee76\" returns successfully" Dec 13 01:08:41.580755 containerd[1461]: time="2024-12-13T01:08:41.580721106Z" level=info msg="StopPodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\"" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.620 [WARNING][5385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca", Pod:"calico-apiserver-79f8c64c55-j6szh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cde4d09e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.620 [INFO][5385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.620 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" iface="eth0" netns="" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.620 [INFO][5385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.620 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.645 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.645 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.645 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.650 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.650 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.651 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.656135 containerd[1461]: 2024-12-13 01:08:41.653 [INFO][5385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.656569 containerd[1461]: time="2024-12-13T01:08:41.656196085Z" level=info msg="TearDown network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" successfully" Dec 13 01:08:41.656569 containerd[1461]: time="2024-12-13T01:08:41.656230651Z" level=info msg="StopPodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" returns successfully" Dec 13 01:08:41.656905 containerd[1461]: time="2024-12-13T01:08:41.656847982Z" level=info msg="RemovePodSandbox for \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\"" Dec 13 01:08:41.656905 containerd[1461]: time="2024-12-13T01:08:41.656887066Z" level=info msg="Forcibly stopping sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\"" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.696 [WARNING][5414] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"56f972bd-9655-41a5-b4f6-4b6ac7ebcdc2", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2f2eca98ab9bda8dd211008244e42749044a8b4424e06ce3ceacee6088ba8ca", Pod:"calico-apiserver-79f8c64c55-j6szh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4cde4d09e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.696 [INFO][5414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.696 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" iface="eth0" netns="" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.696 [INFO][5414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.696 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.720 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.720 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.721 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.726 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.726 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" HandleID="k8s-pod-network.d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Workload="localhost-k8s-calico--apiserver--79f8c64c55--j6szh-eth0" Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.728 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.734003 containerd[1461]: 2024-12-13 01:08:41.730 [INFO][5414] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9" Dec 13 01:08:41.734514 containerd[1461]: time="2024-12-13T01:08:41.734057133Z" level=info msg="TearDown network for sandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" successfully" Dec 13 01:08:41.738117 containerd[1461]: time="2024-12-13T01:08:41.738063973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:08:41.738173 containerd[1461]: time="2024-12-13T01:08:41.738129508Z" level=info msg="RemovePodSandbox \"d56773b5594c830ea0d30afc45ea93e6a04e7ef99df13bf23e149df2fcc4f1c9\" returns successfully" Dec 13 01:08:41.738774 containerd[1461]: time="2024-12-13T01:08:41.738742871Z" level=info msg="StopPodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\"" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.777 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"048e2ce0-7d8d-4a74-8789-6bbabf5e378c", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33", Pod:"calico-apiserver-79f8c64c55-jqxwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18fb3e71582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.778 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.778 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" iface="eth0" netns="" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.778 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.778 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.798 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.798 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.798 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.803 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.803 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.804 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.809083 containerd[1461]: 2024-12-13 01:08:41.806 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.809539 containerd[1461]: time="2024-12-13T01:08:41.809134759Z" level=info msg="TearDown network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" successfully" Dec 13 01:08:41.809539 containerd[1461]: time="2024-12-13T01:08:41.809161069Z" level=info msg="StopPodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" returns successfully" Dec 13 01:08:41.809706 containerd[1461]: time="2024-12-13T01:08:41.809687257Z" level=info msg="RemovePodSandbox for \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\"" Dec 13 01:08:41.809739 containerd[1461]: time="2024-12-13T01:08:41.809713768Z" level=info msg="Forcibly stopping sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\"" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.845 [WARNING][5475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0", GenerateName:"calico-apiserver-79f8c64c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"048e2ce0-7d8d-4a74-8789-6bbabf5e378c", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f8c64c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9abffd60dacacba9f717bda1ec35637658c8182c1afa679392546868f7b4bb33", Pod:"calico-apiserver-79f8c64c55-jqxwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18fb3e71582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.845 [INFO][5475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.845 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" iface="eth0" netns="" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.845 [INFO][5475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.845 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.866 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.866 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.866 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.876 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.876 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" HandleID="k8s-pod-network.13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Workload="localhost-k8s-calico--apiserver--79f8c64c55--jqxwq-eth0" Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.878 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.883834 containerd[1461]: 2024-12-13 01:08:41.881 [INFO][5475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af" Dec 13 01:08:41.884533 containerd[1461]: time="2024-12-13T01:08:41.884442331Z" level=info msg="TearDown network for sandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" successfully" Dec 13 01:08:41.899120 containerd[1461]: time="2024-12-13T01:08:41.898700348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:08:41.899120 containerd[1461]: time="2024-12-13T01:08:41.898822961Z" level=info msg="RemovePodSandbox \"13a44417730428430e4c7d2b10df879394ed4de72018127ceec88b59494215af\" returns successfully" Dec 13 01:08:41.900624 containerd[1461]: time="2024-12-13T01:08:41.900598098Z" level=info msg="StopPodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\"" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.937 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--57lkc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6cab1421-2490-4e1e-a106-3059cdb91580", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6", Pod:"coredns-76f75df574-57lkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a45482d3f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.938 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.938 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" iface="eth0" netns="" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.938 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.938 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.956 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.956 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.956 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.961 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.961 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.962 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:41.967791 containerd[1461]: 2024-12-13 01:08:41.964 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:41.967791 containerd[1461]: time="2024-12-13T01:08:41.967664598Z" level=info msg="TearDown network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" successfully" Dec 13 01:08:41.967791 containerd[1461]: time="2024-12-13T01:08:41.967696789Z" level=info msg="StopPodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" returns successfully" Dec 13 01:08:41.968531 containerd[1461]: time="2024-12-13T01:08:41.968217066Z" level=info msg="RemovePodSandbox for \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\"" Dec 13 01:08:41.968531 containerd[1461]: time="2024-12-13T01:08:41.968245169Z" level=info msg="Forcibly stopping sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\"" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.004 [WARNING][5535] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--57lkc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6cab1421-2490-4e1e-a106-3059cdb91580", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfdd998336f46fa377e1949e7c4bf7d4aa3efb76bb55f71a505ec20a71eb0ca6", Pod:"coredns-76f75df574-57lkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a45482d3f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.005 [INFO][5535] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.005 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" iface="eth0" netns="" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.005 [INFO][5535] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.005 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.030 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.030 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.030 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.037 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.037 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" HandleID="k8s-pod-network.93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Workload="localhost-k8s-coredns--76f75df574--57lkc-eth0" Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.040 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:08:42.047194 containerd[1461]: 2024-12-13 01:08:42.044 [INFO][5535] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e" Dec 13 01:08:42.048437 containerd[1461]: time="2024-12-13T01:08:42.047245920Z" level=info msg="TearDown network for sandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" successfully" Dec 13 01:08:42.299616 containerd[1461]: time="2024-12-13T01:08:42.299284617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:08:42.299616 containerd[1461]: time="2024-12-13T01:08:42.299374939Z" level=info msg="RemovePodSandbox \"93673e3c8a5f05b9d191b96ec673db82f789c1aa0a0a66b26631b2ff00758b7e\" returns successfully" Dec 13 01:08:42.646414 containerd[1461]: time="2024-12-13T01:08:42.646354845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:42.700615 containerd[1461]: time="2024-12-13T01:08:42.700535599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:08:42.740383 containerd[1461]: time="2024-12-13T01:08:42.740335023Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:42.775589 containerd[1461]: time="2024-12-13T01:08:42.775539652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:42.776212 containerd[1461]: time="2024-12-13T01:08:42.776171910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.047606857s" Dec 13 01:08:42.776212 containerd[1461]: time="2024-12-13T01:08:42.776206506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:08:42.777816 containerd[1461]: time="2024-12-13T01:08:42.777786382Z" level=info msg="CreateContainer 
within sandbox \"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:08:42.997482 containerd[1461]: time="2024-12-13T01:08:42.997338083Z" level=info msg="CreateContainer within sandbox \"f705756ca7bedef929690c219214fb974cca610c4649bd8f1c039791c620a917\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cc83938bc764e688642cc0e93c7f040a7021eb232695d4a59232a70d2cf48882\"" Dec 13 01:08:42.997995 containerd[1461]: time="2024-12-13T01:08:42.997957668Z" level=info msg="StartContainer for \"cc83938bc764e688642cc0e93c7f040a7021eb232695d4a59232a70d2cf48882\"" Dec 13 01:08:43.045741 systemd[1]: run-containerd-runc-k8s.io-cc83938bc764e688642cc0e93c7f040a7021eb232695d4a59232a70d2cf48882-runc.ug0BVR.mount: Deactivated successfully. Dec 13 01:08:43.056587 systemd[1]: Started cri-containerd-cc83938bc764e688642cc0e93c7f040a7021eb232695d4a59232a70d2cf48882.scope - libcontainer container cc83938bc764e688642cc0e93c7f040a7021eb232695d4a59232a70d2cf48882. Dec 13 01:08:43.137109 containerd[1461]: time="2024-12-13T01:08:43.137066314Z" level=info msg="StartContainer for \"cc83938bc764e688642cc0e93c7f040a7021eb232695d4a59232a70d2cf48882\" returns successfully" Dec 13 01:08:43.269734 kubelet[2595]: I1213 01:08:43.269591 2595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-zplvr" podStartSLOduration=30.580816806 podStartE2EDuration="39.269542575s" podCreationTimestamp="2024-12-13 01:08:04 +0000 UTC" firstStartedPulling="2024-12-13 01:08:34.087715052 +0000 UTC m=+53.246983180" lastFinishedPulling="2024-12-13 01:08:42.776440821 +0000 UTC m=+61.935708949" observedRunningTime="2024-12-13 01:08:43.269187753 +0000 UTC m=+62.428455901" watchObservedRunningTime="2024-12-13 01:08:43.269542575 +0000 UTC m=+62.428810703" Dec 13 01:08:44.037765 kubelet[2595]: I1213 01:08:44.037732 2595 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:08:44.039065 kubelet[2595]: I1213 01:08:44.039020 2595 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:08:45.997334 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:59466.service - OpenSSH per-connection server daemon (10.0.0.1:59466). Dec 13 01:08:46.041970 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 59466 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:46.044030 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:46.049025 systemd-logind[1444]: New session 19 of user core. Dec 13 01:08:46.059566 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:08:46.179832 sshd[5601]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:46.192566 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:59466.service: Deactivated successfully. Dec 13 01:08:46.194480 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:08:46.196096 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:08:46.203633 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:59474.service - OpenSSH per-connection server daemon (10.0.0.1:59474). Dec 13 01:08:46.204482 systemd-logind[1444]: Removed session 19. 
Dec 13 01:08:46.235230 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 59474 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:46.237797 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:46.244595 systemd-logind[1444]: New session 20 of user core. Dec 13 01:08:46.250676 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:08:46.664111 sshd[5615]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:46.676029 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:59474.service: Deactivated successfully. Dec 13 01:08:46.678328 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:08:46.680054 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:08:46.687918 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:59486.service - OpenSSH per-connection server daemon (10.0.0.1:59486). Dec 13 01:08:46.689175 systemd-logind[1444]: Removed session 20. Dec 13 01:08:46.721278 sshd[5627]: Accepted publickey for core from 10.0.0.1 port 59486 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:46.722979 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:46.727374 systemd-logind[1444]: New session 21 of user core. Dec 13 01:08:46.737527 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:08:46.841617 kubelet[2595]: I1213 01:08:46.841551 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:08:48.572071 sshd[5627]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:48.584760 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:59486.service: Deactivated successfully. Dec 13 01:08:48.590817 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:08:48.595713 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:08:48.613017 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:46076.service - OpenSSH per-connection server daemon (10.0.0.1:46076). Dec 13 01:08:48.613748 systemd-logind[1444]: Removed session 21. Dec 13 01:08:48.644814 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 46076 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:48.646574 sshd[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:48.650974 systemd-logind[1444]: New session 22 of user core. Dec 13 01:08:48.661645 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:08:48.990303 sshd[5649]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:49.001084 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:46076.service: Deactivated successfully. Dec 13 01:08:49.003561 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:08:49.005148 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:08:49.011872 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086). Dec 13 01:08:49.012962 systemd-logind[1444]: Removed session 22. Dec 13 01:08:49.046306 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:49.047956 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:49.054666 systemd-logind[1444]: New session 23 of user core. 
Dec 13 01:08:49.060579 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:08:49.168183 sshd[5662]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:49.173005 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:46086.service: Deactivated successfully. Dec 13 01:08:49.175451 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:08:49.176147 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:08:49.177382 systemd-logind[1444]: Removed session 23. Dec 13 01:08:54.185647 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:46092.service - OpenSSH per-connection server daemon (10.0.0.1:46092). Dec 13 01:08:54.222495 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 46092 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:54.224235 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:54.228734 systemd-logind[1444]: New session 24 of user core. Dec 13 01:08:54.243529 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:08:54.376129 sshd[5705]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:54.379653 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:46092.service: Deactivated successfully. Dec 13 01:08:54.381688 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:08:54.383173 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:08:54.384373 systemd-logind[1444]: Removed session 24. Dec 13 01:08:59.388814 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:46132.service - OpenSSH per-connection server daemon (10.0.0.1:46132). Dec 13 01:08:59.423847 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 46132 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:59.425464 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:59.429267 systemd-logind[1444]: New session 25 of user core. Dec 13 01:08:59.439530 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:08:59.553742 sshd[5724]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:59.558353 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:46132.service: Deactivated successfully. Dec 13 01:08:59.560527 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:08:59.561228 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:08:59.562159 systemd-logind[1444]: Removed session 25. Dec 13 01:08:59.956809 kubelet[2595]: E1213 01:08:59.956761 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:59.957672 kubelet[2595]: E1213 01:08:59.956991 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:04.569346 systemd[1]: Started sshd@25-10.0.0.43:22-10.0.0.1:46142.service - OpenSSH per-connection server daemon (10.0.0.1:46142). Dec 13 01:09:04.606224 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 46142 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:04.607723 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:04.611665 systemd-logind[1444]: New session 26 of user core. 
Dec 13 01:09:04.621521 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:09:04.733545 sshd[5739]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:04.737175 systemd[1]: sshd@25-10.0.0.43:22-10.0.0.1:46142.service: Deactivated successfully. Dec 13 01:09:04.739383 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:09:04.740104 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:09:04.741088 systemd-logind[1444]: Removed session 26. Dec 13 01:09:09.750487 systemd[1]: Started sshd@26-10.0.0.43:22-10.0.0.1:42566.service - OpenSSH per-connection server daemon (10.0.0.1:42566). Dec 13 01:09:09.791109 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 42566 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:09:09.793024 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:09:09.797818 systemd-logind[1444]: New session 27 of user core. Dec 13 01:09:09.806621 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:09:09.956216 kubelet[2595]: E1213 01:09:09.956164 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:09:09.957111 sshd[5774]: pam_unix(sshd:session): session closed for user core Dec 13 01:09:09.961952 systemd[1]: sshd@26-10.0.0.43:22-10.0.0.1:42566.service: Deactivated successfully. Dec 13 01:09:09.964663 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:09:09.965443 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:09:09.966445 systemd-logind[1444]: Removed session 27.